00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 120 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3298 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.093 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.093 The recommended git tool is: git 00:00:00.094 using credential 00000000-0000-0000-0000-000000000002 00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.135 Fetching changes from the remote Git repository 00:00:00.137 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.196 > git --version # 'git version 2.39.2' 00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.217 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.217 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.735 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.745 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.755 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:05.755 > git config core.sparsecheckout # timeout=10 00:00:05.767 > git read-tree -mu HEAD # timeout=10 00:00:05.784 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:05.804 Commit message: "packer: Add bios builder" 00:00:05.804 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:05.934 [Pipeline] Start of Pipeline 00:00:05.952 [Pipeline] library 00:00:05.954 Loading library shm_lib@master 00:00:05.954 Library shm_lib@master is cached. Copying from home. 00:00:05.971 [Pipeline] node 00:00:05.979 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.981 [Pipeline] { 00:00:05.996 [Pipeline] catchError 00:00:05.998 [Pipeline] { 00:00:06.012 [Pipeline] wrap 00:00:06.023 [Pipeline] { 00:00:06.031 [Pipeline] stage 00:00:06.033 [Pipeline] { (Prologue) 00:00:06.203 [Pipeline] sh 00:00:06.484 + logger -p user.info -t JENKINS-CI 00:00:06.500 [Pipeline] echo 00:00:06.502 Node: GP11 00:00:06.507 [Pipeline] sh 00:00:06.802 [Pipeline] setCustomBuildProperty 00:00:06.813 [Pipeline] echo 00:00:06.814 Cleanup processes 00:00:06.817 [Pipeline] sh 00:00:07.097 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.097 3301425 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.111 [Pipeline] sh 00:00:07.397 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.397 ++ grep -v 'sudo pgrep' 00:00:07.397 ++ awk '{print $1}' 00:00:07.397 + sudo kill -9 00:00:07.397 + true 00:00:07.415 [Pipeline] cleanWs 00:00:07.426 [WS-CLEANUP] Deleting project workspace... 00:00:07.426 [WS-CLEANUP] Deferred wipeout is used... 
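Condensed, the stale-process cleanup traced above reduces to one pipeline; a minimal sketch reconstructed from the xtrace, where the WORKSPACE variable name is an assumption (the job inlines its real path):

    #!/usr/bin/env bash
    # Minimal sketch of the cleanup step traced above (reconstructed, not verbatim).
    # WORKSPACE is an assumed parameter name; the pipeline hardcodes the path.
    WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-phy-autotest}

    # List leftover processes from a previous run of this job, drop the
    # pgrep invocation itself from the listing, and keep only the PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

    # With no stale PIDs the kill has no arguments and fails; '|| true'
    # keeps the step from failing the build, matching '+ true' above.
    sudo kill -9 $pids || true

$pids is deliberately left unquoted so multiple PIDs split into separate kill arguments.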
00:00:07.433 [WS-CLEANUP] done 00:00:07.438 [Pipeline] setCustomBuildProperty 00:00:07.453 [Pipeline] sh 00:00:07.738 + sudo git config --global --replace-all safe.directory '*' 00:00:07.830 [Pipeline] httpRequest 00:00:07.867 [Pipeline] echo 00:00:07.869 Sorcerer 10.211.164.101 is alive 00:00:07.877 [Pipeline] httpRequest 00:00:07.882 HttpMethod: GET 00:00:07.883 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.884 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.902 Response Code: HTTP/1.1 200 OK 00:00:07.903 Success: Status code 200 is in the accepted range: 200,404 00:00:07.903 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:13.384 [Pipeline] sh 00:00:13.663 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:13.678 [Pipeline] httpRequest 00:00:13.710 [Pipeline] echo 00:00:13.712 Sorcerer 10.211.164.101 is alive 00:00:13.720 [Pipeline] httpRequest 00:00:13.724 HttpMethod: GET 00:00:13.725 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:13.725 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:13.746 Response Code: HTTP/1.1 200 OK 00:00:13.747 Success: Status code 200 is in the accepted range: 200,404 00:00:13.747 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:29.581 [Pipeline] sh 00:01:29.869 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:33.167 [Pipeline] sh 00:01:33.452 + git -C spdk log --oneline -n5 00:01:33.452 241d0f3c9 test: fix dpdk builds on ubuntu24 00:01:33.452 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs 00:01:33.452 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:33.452 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:33.452 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:33.471 [Pipeline] withCredentials 00:01:33.483 > git --version # timeout=10 00:01:33.497 > git --version # 'git version 2.39.2' 00:01:33.515 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:33.517 [Pipeline] { 00:01:33.524 [Pipeline] retry 00:01:33.526 [Pipeline] { 00:01:33.541 [Pipeline] sh 00:01:33.826 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:34.101 [Pipeline] } 00:01:34.124 [Pipeline] // retry 00:01:34.130 [Pipeline] } 00:01:34.151 [Pipeline] // withCredentials 00:01:34.161 [Pipeline] httpRequest 00:01:34.179 [Pipeline] echo 00:01:34.181 Sorcerer 10.211.164.101 is alive 00:01:34.190 [Pipeline] httpRequest 00:01:34.195 HttpMethod: GET 00:01:34.196 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.196 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.199 Response Code: HTTP/1.1 200 OK 00:01:34.199 Success: Status code 200 is in the accepted range: 200,404 00:01:34.200 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:37.750 [Pipeline] sh 00:01:38.031 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:39.944 [Pipeline] sh 00:01:40.228 + git -C dpdk log --oneline -n5 00:01:40.228 caf0f5d395 version: 22.11.4 00:01:40.228 7d6f1cc05f Revert "net/iavf: fix abnormal 
disable HW interrupt" 00:01:40.228 dc9c799c7d vhost: fix missing spinlock unlock 00:01:40.228 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:40.228 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:40.240 [Pipeline] } 00:01:40.257 [Pipeline] // stage 00:01:40.267 [Pipeline] stage 00:01:40.270 [Pipeline] { (Prepare) 00:01:40.292 [Pipeline] writeFile 00:01:40.310 [Pipeline] sh 00:01:40.594 + logger -p user.info -t JENKINS-CI 00:01:40.608 [Pipeline] sh 00:01:40.904 + logger -p user.info -t JENKINS-CI 00:01:40.919 [Pipeline] sh 00:01:41.196 + cat autorun-spdk.conf 00:01:41.196 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.196 SPDK_TEST_NVMF=1 00:01:41.196 SPDK_TEST_NVME_CLI=1 00:01:41.196 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.196 SPDK_TEST_NVMF_NICS=e810 00:01:41.196 SPDK_TEST_VFIOUSER=1 00:01:41.196 SPDK_RUN_UBSAN=1 00:01:41.196 NET_TYPE=phy 00:01:41.196 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.196 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.204 RUN_NIGHTLY=1 00:01:41.208 [Pipeline] readFile 00:01:41.235 [Pipeline] withEnv 00:01:41.237 [Pipeline] { 00:01:41.252 [Pipeline] sh 00:01:41.537 + set -ex 00:01:41.537 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:41.537 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:41.537 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.537 ++ SPDK_TEST_NVMF=1 00:01:41.537 ++ SPDK_TEST_NVME_CLI=1 00:01:41.537 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.537 ++ SPDK_TEST_NVMF_NICS=e810 00:01:41.537 ++ SPDK_TEST_VFIOUSER=1 00:01:41.537 ++ SPDK_RUN_UBSAN=1 00:01:41.537 ++ NET_TYPE=phy 00:01:41.537 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.537 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.537 ++ RUN_NIGHTLY=1 00:01:41.537 + case $SPDK_TEST_NVMF_NICS in 00:01:41.537 + DRIVERS=ice 00:01:41.537 + [[ tcp == \r\d\m\a ]] 00:01:41.537 + [[ -n ice ]] 00:01:41.537 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:41.537 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:41.537 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:41.537 rmmod: ERROR: Module irdma is not currently loaded 00:01:41.537 rmmod: ERROR: Module i40iw is not currently loaded 00:01:41.537 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:41.537 + true 00:01:41.537 + for D in $DRIVERS 00:01:41.537 + sudo modprobe ice 00:01:41.537 + exit 0 00:01:41.546 [Pipeline] } 00:01:41.564 [Pipeline] // withEnv 00:01:41.569 [Pipeline] } 00:01:41.586 [Pipeline] // stage 00:01:41.596 [Pipeline] catchError 00:01:41.598 [Pipeline] { 00:01:41.613 [Pipeline] timeout 00:01:41.614 Timeout set to expire in 50 min 00:01:41.616 [Pipeline] { 00:01:41.632 [Pipeline] stage 00:01:41.634 [Pipeline] { (Tests) 00:01:41.650 [Pipeline] sh 00:01:41.935 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:41.935 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:41.935 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:41.935 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:41.935 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:41.935 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:41.935 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:41.935 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:41.935 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:41.935 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:41.935 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:41.935 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:41.935 + source /etc/os-release 00:01:41.935 ++ NAME='Fedora Linux' 00:01:41.935 ++ VERSION='38 (Cloud Edition)' 00:01:41.935 ++ ID=fedora 00:01:41.935 ++ VERSION_ID=38 00:01:41.935 ++ VERSION_CODENAME= 00:01:41.935 ++ PLATFORM_ID=platform:f38 00:01:41.935 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:41.935 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:41.935 ++ LOGO=fedora-logo-icon 00:01:41.935 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:41.935 ++ HOME_URL=https://fedoraproject.org/ 00:01:41.935 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:41.935 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:41.935 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:41.935 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:41.935 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:41.935 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:41.935 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:41.935 ++ SUPPORT_END=2024-05-14 00:01:41.935 ++ VARIANT='Cloud Edition' 00:01:41.935 ++ VARIANT_ID=cloud 00:01:41.935 + uname -a 00:01:41.935 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:41.935 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:42.873 Hugepages 00:01:42.873 node hugesize free / total 00:01:42.873 node0 1048576kB 0 / 0 00:01:42.873 node0 2048kB 0 / 0 00:01:42.873 node1 1048576kB 0 / 0 00:01:42.873 node1 2048kB 0 / 0 00:01:42.873 00:01:42.873 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:42.873 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:42.873 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:42.873 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:42.873 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:42.873 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:42.873 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:42.873 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:42.873 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:42.873 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:42.873 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:42.873 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:42.873 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:42.873 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:42.873 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:42.873 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:42.873 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:42.873 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:42.873 + rm -f /tmp/spdk-ld-path 00:01:42.873 + source autorun-spdk.conf 00:01:42.873 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.873 ++ SPDK_TEST_NVMF=1 00:01:42.873 ++ SPDK_TEST_NVME_CLI=1 00:01:42.873 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.873 ++ SPDK_TEST_NVMF_NICS=e810 00:01:42.873 ++ SPDK_TEST_VFIOUSER=1 00:01:42.873 ++ SPDK_RUN_UBSAN=1 00:01:42.873 ++ NET_TYPE=phy 00:01:42.873 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:42.873 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.873 ++ RUN_NIGHTLY=1 00:01:42.873 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:42.873 + [[ -n '' ]] 00:01:42.873 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.873 + for M in /var/spdk/build-*-manifest.txt 00:01:42.873 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:42.873 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:42.873 + for M in /var/spdk/build-*-manifest.txt 00:01:42.873 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:42.873 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:42.873 ++ uname 00:01:42.873 + [[ Linux == \L\i\n\u\x ]] 00:01:42.873 + sudo dmesg -T 00:01:42.873 + sudo dmesg --clear 00:01:42.873 + dmesg_pid=3302749 00:01:42.873 + [[ Fedora Linux == FreeBSD ]] 00:01:42.873 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:42.873 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:42.873 + sudo dmesg -Tw 00:01:42.873 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:42.873 + [[ -x /usr/src/fio-static/fio ]] 00:01:42.873 + export FIO_BIN=/usr/src/fio-static/fio 00:01:42.873 + FIO_BIN=/usr/src/fio-static/fio 00:01:42.873 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:42.873 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:42.873 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:42.873 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:42.873 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:42.873 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:42.873 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:42.873 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:42.873 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:42.873 Test configuration: 00:01:42.873 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.873 SPDK_TEST_NVMF=1 00:01:42.873 SPDK_TEST_NVME_CLI=1 00:01:42.873 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.873 SPDK_TEST_NVMF_NICS=e810 00:01:42.873 SPDK_TEST_VFIOUSER=1 00:01:42.873 SPDK_RUN_UBSAN=1 00:01:42.873 NET_TYPE=phy 00:01:42.873 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:42.873 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.131 RUN_NIGHTLY=1 22:31:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:43.131 22:31:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:43.131 22:31:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:43.131 22:31:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:43.131 22:31:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.131 22:31:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.131 22:31:35 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.131 22:31:35 -- paths/export.sh@5 -- $ export PATH 00:01:43.132 22:31:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.132 22:31:35 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:43.132 22:31:35 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:43.132 22:31:35 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1722025895.XXXXXX 00:01:43.132 22:31:35 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1722025895.K3UU4h 00:01:43.132 22:31:35 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:43.132 22:31:35 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:01:43.132 22:31:35 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.132 22:31:35 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:43.132 22:31:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:43.132 22:31:35 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:43.132 22:31:35 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:43.132 22:31:35 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:43.132 22:31:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.132 22:31:35 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:43.132 22:31:35 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:01:43.132 22:31:35 -- pm/common@17 -- $ local monitor 00:01:43.132 22:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.132 22:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.132 22:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.132 22:31:35 -- pm/common@21 -- $ date +%s 00:01:43.132 22:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.132 22:31:35 -- pm/common@21 -- $ date +%s 00:01:43.132 22:31:35 -- pm/common@25 -- $ sleep 1 00:01:43.132 22:31:35 -- pm/common@21 -- $ date +%s 00:01:43.132 22:31:35 -- pm/common@21 -- $ date +%s 00:01:43.132 22:31:35 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722025895 00:01:43.132 22:31:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722025895 00:01:43.132 22:31:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722025895 00:01:43.132 22:31:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722025895 00:01:43.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722025895_collect-vmstat.pm.log 00:01:43.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722025895_collect-cpu-load.pm.log 00:01:43.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722025895_collect-cpu-temp.pm.log 00:01:43.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722025895_collect-bmc-pm.bmc.pm.log 00:01:44.070 22:31:36 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:01:44.070 22:31:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:44.070 22:31:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:44.070 22:31:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.070 22:31:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:44.070 Fri Jul 26 08:31:36 PM UTC 2024 00:01:44.070 22:31:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:44.070 v24.05-15-g241d0f3c9 00:01:44.070 22:31:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:44.070 22:31:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:44.070 22:31:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:44.070 22:31:36 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:44.070 22:31:36 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:44.070 22:31:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.070 ************************************ 00:01:44.070 START TEST ubsan 00:01:44.070 ************************************ 00:01:44.070 22:31:36 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:44.070 using ubsan 00:01:44.070 00:01:44.070 real 0m0.000s 00:01:44.070 user 0m0.000s 00:01:44.070 sys 0m0.000s 00:01:44.070 22:31:36 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:44.070 22:31:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:44.070 ************************************ 00:01:44.070 END TEST ubsan 00:01:44.070 ************************************ 00:01:44.070 22:31:36 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:44.070 22:31:36 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:44.070 22:31:36 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:44.070 22:31:36 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:44.070 22:31:36 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:44.070 22:31:36 -- common/autotest_common.sh@10 -- $ set +x 
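The four resource monitors launched above (cpu-load, vmstat, cpu-temp, bmc-pm) share one invocation pattern; a condensed sketch assuming the collectors are backgrounded (the loop and the rootdir/output names are reconstructions from the trace, not the verbatim pm/common source):

    # Condensed sketch of the monitor startup traced above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    output=$rootdir/../output/power
    stamp=$(date +%s)   # one epoch stamp names all four logs, e.g. 1722025895

    for m in collect-cpu-load collect-vmstat collect-cpu-temp; do
        # -d: directory for the .pm.log, -l: log to file, -p: log name prefix
        "$rootdir/scripts/perf/pm/$m" -d "$output" -l -p "monitor.autobuild.sh.$stamp" &
    done

    # BMC power readings need elevated access, hence the sudo -E in the trace.
    sudo -E "$rootdir/scripts/perf/pm/collect-bmc-pm" -d "$output" -l -p "monitor.autobuild.sh.$stamp" &

Each collector then emits the "Redirecting to ..._<name>.pm.log" lines seen above.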
00:01:44.070 ************************************ 00:01:44.070 START TEST build_native_dpdk 00:01:44.070 ************************************ 00:01:44.070 22:31:36 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:44.070 caf0f5d395 version: 22.11.4 00:01:44.070 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:44.070 dc9c799c7d vhost: fix missing spinlock unlock 00:01:44.070 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:44.070 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:44.070 
22:31:36 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:44.070 patching file config/rte_config.h 00:01:44.070 Hunk #1 succeeded at 60 (offset 1 line). 00:01:44.070 22:31:36 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:44.070 22:31:36 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:44.071 22:31:36 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:44.071 22:31:36 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:44.071 patching file lib/pcapng/rte_pcapng.c 00:01:44.071 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:44.071 22:31:36 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:44.071 22:31:36 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:44.071 22:31:36 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:44.071 22:31:36 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:44.071 22:31:36 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:48.270 The Meson build system 00:01:48.270 Version: 1.3.1 00:01:48.270 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:48.270 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:48.270 Build type: native build 00:01:48.270 Program cat found: YES (/usr/bin/cat) 00:01:48.270 Project name: DPDK 00:01:48.270 Project version: 22.11.4 00:01:48.270 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:48.270 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:48.270 Host machine cpu family: x86_64 00:01:48.270 Host machine cpu: x86_64 00:01:48.270 Message: ## Building in Developer Mode ## 00:01:48.270 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.270 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:48.270 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.270 Program objdump found: YES (/usr/bin/objdump) 00:01:48.270 Program python3 found: YES (/usr/bin/python3) 00:01:48.270 Program cat found: YES (/usr/bin/cat) 00:01:48.270 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
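Expanded from the xtrace above, the lt version checks that gate the DPDK patches boil down to a component-wise compare; a condensed reconstruction of the scripts/common.sh logic as traced (the real helper supports more operators than '<'):

    # Condensed reconstruction of the version compare traced above.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"   # e.g. 22.11.4 -> (22 11 4)
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && return 1   # already greater, so not '<'
            ((ver1[v] < ver2[v])) && return 0   # strictly less, '<' holds
        done
        return 1                                # equal, so '<' is false
    }

Hence lt 22.11.4 21.11.0 returns 1 and lt 22.11.4 24.07.0 returns 0, matching the two patch decisions traced above (rte_config.h patched on the first, rte_pcapng.c on the second).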
00:01:48.270 Checking for size of "void *" : 8 00:01:48.270 Checking for size of "void *" : 8 (cached) 00:01:48.270 Library m found: YES 00:01:48.270 Library numa found: YES 00:01:48.270 Has header "numaif.h" : YES 00:01:48.270 Library fdt found: NO 00:01:48.270 Library execinfo found: NO 00:01:48.270 Has header "execinfo.h" : YES 00:01:48.270 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:48.270 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.270 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.270 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.270 Run-time dependency openssl found: YES 3.0.9 00:01:48.270 Run-time dependency libpcap found: YES 1.10.4 00:01:48.270 Has header "pcap.h" with dependency libpcap: YES 00:01:48.270 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.270 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.270 Compiler for C supports arguments -Wformat: YES 00:01:48.270 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.270 Compiler for C supports arguments -Wformat-security: NO 00:01:48.270 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.270 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.270 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.270 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.270 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.270 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.270 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.270 Compiler for C supports arguments -Wundef: YES 00:01:48.270 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.270 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.270 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:48.270 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.270 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.270 Compiler for C supports arguments -mavx512f: YES 00:01:48.270 Checking if "AVX512 checking" compiles: YES 00:01:48.270 Fetching value of define "__SSE4_2__" : 1 00:01:48.270 Fetching value of define "__AES__" : 1 00:01:48.270 Fetching value of define "__AVX__" : 1 00:01:48.270 Fetching value of define "__AVX2__" : (undefined) 00:01:48.270 Fetching value of define "__AVX512BW__" : (undefined) 00:01:48.270 Fetching value of define "__AVX512CD__" : (undefined) 00:01:48.270 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:48.270 Fetching value of define "__AVX512F__" : (undefined) 00:01:48.270 Fetching value of define "__AVX512VL__" : (undefined) 00:01:48.270 Fetching value of define "__PCLMUL__" : 1 00:01:48.270 Fetching value of define "__RDRND__" : 1 00:01:48.270 Fetching value of define "__RDSEED__" : (undefined) 00:01:48.270 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.270 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.270 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.270 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.270 Checking for function "getentropy" : YES 00:01:48.270 Message: lib/eal: Defining dependency "eal" 00:01:48.270 Message: lib/ring: Defining dependency "ring" 00:01:48.270 Message: lib/rcu: Defining dependency "rcu" 00:01:48.270 Message: lib/mempool: Defining dependency "mempool" 00:01:48.270 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.270 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.270 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.270 Compiler for C supports arguments -mpclmul: YES 00:01:48.270 Compiler for C supports arguments -maes: YES 00:01:48.270 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.270 Compiler for C supports arguments -mavx512bw: YES 00:01:48.270 Compiler for C supports arguments -mavx512dq: YES 00:01:48.270 Compiler for C supports arguments -mavx512vl: YES 00:01:48.270 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.270 Compiler for C supports arguments -mavx2: YES 00:01:48.270 Compiler for C supports arguments -mavx: YES 00:01:48.270 Message: lib/net: Defining dependency "net" 00:01:48.270 Message: lib/meter: Defining dependency "meter" 00:01:48.270 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.270 Message: lib/pci: Defining dependency "pci" 00:01:48.270 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.270 Message: lib/metrics: Defining dependency "metrics" 00:01:48.270 Message: lib/hash: Defining dependency "hash" 00:01:48.270 Message: lib/timer: Defining dependency "timer" 00:01:48.270 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:48.270 Compiler for C supports arguments -mavx2: YES (cached) 00:01:48.270 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.270 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:48.270 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:48.270 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:48.270 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:48.270 Message: lib/acl: Defining dependency "acl" 00:01:48.270 Message: lib/bbdev: Defining dependency "bbdev" 00:01:48.270 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:48.270 Run-time dependency libelf found: YES 0.190 00:01:48.270 Message: lib/bpf: Defining dependency "bpf" 00:01:48.270 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:48.270 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.270 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.270 Message: lib/distributor: Defining dependency "distributor" 00:01:48.270 Message: lib/efd: Defining dependency "efd" 00:01:48.270 Message: lib/eventdev: Defining dependency "eventdev" 00:01:48.270 Message: lib/gpudev: Defining dependency "gpudev" 00:01:48.270 Message: lib/gro: Defining dependency "gro" 00:01:48.270 Message: lib/gso: Defining dependency "gso" 00:01:48.270 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:48.270 Message: lib/jobstats: Defining dependency "jobstats" 00:01:48.270 Message: lib/latencystats: Defining dependency "latencystats" 00:01:48.270 Message: lib/lpm: Defining dependency "lpm" 00:01:48.270 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.270 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:48.270 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:48.270 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:48.270 Message: lib/member: Defining dependency "member" 00:01:48.270 Message: lib/pcapng: Defining dependency "pcapng" 00:01:48.270 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.270 Message: lib/power: Defining dependency "power" 00:01:48.270 Message: lib/rawdev: Defining dependency "rawdev" 00:01:48.270 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:48.270 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.270 Message: lib/rib: Defining dependency "rib" 00:01:48.270 Message: lib/reorder: Defining dependency "reorder" 00:01:48.270 Message: lib/sched: Defining dependency "sched" 00:01:48.270 Message: lib/security: Defining dependency "security" 00:01:48.270 Message: lib/stack: Defining dependency "stack" 00:01:48.270 Has header "linux/userfaultfd.h" : YES 00:01:48.270 Message: lib/vhost: Defining dependency "vhost" 00:01:48.270 Message: lib/ipsec: Defining dependency "ipsec" 00:01:48.270 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.270 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:48.270 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:48.270 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:48.270 Message: lib/fib: Defining dependency "fib" 00:01:48.270 Message: lib/port: Defining dependency "port" 00:01:48.270 Message: lib/pdump: Defining dependency "pdump" 00:01:48.270 Message: lib/table: Defining dependency "table" 00:01:48.270 Message: lib/pipeline: Defining dependency "pipeline" 00:01:48.270 Message: lib/graph: Defining dependency "graph" 00:01:48.270 Message: lib/node: Defining dependency "node" 00:01:48.270 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:48.270 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.270 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.270 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.270 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:48.270 Compiler for C supports arguments -Wno-unused-value: YES 00:01:49.210 Compiler for C supports arguments -Wno-format: YES 00:01:49.210 Compiler for C supports arguments -Wno-format-security: YES 00:01:49.210 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:49.210 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:49.210 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:49.210 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:49.210 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:49.210 Compiler for C supports arguments -mavx2: YES (cached) 00:01:49.210 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.210 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:49.210 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:49.210 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:49.210 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:49.210 Program doxygen found: YES (/usr/bin/doxygen) 00:01:49.210 Configuring doxy-api.conf using configuration 00:01:49.210 Program sphinx-build found: NO 00:01:49.210 Configuring rte_build_config.h using configuration 00:01:49.210 Message: 00:01:49.210 ================= 00:01:49.210 Applications Enabled 00:01:49.210 ================= 00:01:49.210 00:01:49.210 apps: 00:01:49.210 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:49.210 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:49.210 test-security-perf, 00:01:49.210 00:01:49.210 Message: 00:01:49.210 ================= 00:01:49.210 Libraries Enabled 00:01:49.210 ================= 00:01:49.210 00:01:49.210 libs: 00:01:49.210 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:49.210 meter, ethdev, pci, 
cmdline, metrics, hash, timer, acl, 00:01:49.210 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:49.210 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:49.210 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:49.210 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:49.210 table, pipeline, graph, node, 00:01:49.210 00:01:49.210 Message: 00:01:49.210 =============== 00:01:49.210 Drivers Enabled 00:01:49.210 =============== 00:01:49.210 00:01:49.210 common: 00:01:49.210 00:01:49.210 bus: 00:01:49.210 pci, vdev, 00:01:49.210 mempool: 00:01:49.210 ring, 00:01:49.210 dma: 00:01:49.210 00:01:49.210 net: 00:01:49.210 i40e, 00:01:49.210 raw: 00:01:49.210 00:01:49.210 crypto: 00:01:49.210 00:01:49.210 compress: 00:01:49.210 00:01:49.210 regex: 00:01:49.210 00:01:49.210 vdpa: 00:01:49.210 00:01:49.210 event: 00:01:49.210 00:01:49.210 baseband: 00:01:49.210 00:01:49.210 gpu: 00:01:49.210 00:01:49.210 00:01:49.210 Message: 00:01:49.210 ================= 00:01:49.210 Content Skipped 00:01:49.210 ================= 00:01:49.210 00:01:49.210 apps: 00:01:49.210 00:01:49.210 libs: 00:01:49.210 kni: explicitly disabled via build config (deprecated lib) 00:01:49.210 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:49.210 00:01:49.210 drivers: 00:01:49.210 common/cpt: not in enabled drivers build config 00:01:49.210 common/dpaax: not in enabled drivers build config 00:01:49.210 common/iavf: not in enabled drivers build config 00:01:49.210 common/idpf: not in enabled drivers build config 00:01:49.210 common/mvep: not in enabled drivers build config 00:01:49.210 common/octeontx: not in enabled drivers build config 00:01:49.210 bus/auxiliary: not in enabled drivers build config 00:01:49.210 bus/dpaa: not in enabled drivers build config 00:01:49.210 bus/fslmc: not in enabled drivers build config 00:01:49.211 bus/ifpga: not in enabled drivers build config 00:01:49.211 bus/vmbus: not in enabled drivers build config 00:01:49.211 common/cnxk: not in enabled drivers build config 00:01:49.211 common/mlx5: not in enabled drivers build config 00:01:49.211 common/qat: not in enabled drivers build config 00:01:49.211 common/sfc_efx: not in enabled drivers build config 00:01:49.211 mempool/bucket: not in enabled drivers build config 00:01:49.211 mempool/cnxk: not in enabled drivers build config 00:01:49.211 mempool/dpaa: not in enabled drivers build config 00:01:49.211 mempool/dpaa2: not in enabled drivers build config 00:01:49.211 mempool/octeontx: not in enabled drivers build config 00:01:49.211 mempool/stack: not in enabled drivers build config 00:01:49.211 dma/cnxk: not in enabled drivers build config 00:01:49.211 dma/dpaa: not in enabled drivers build config 00:01:49.211 dma/dpaa2: not in enabled drivers build config 00:01:49.211 dma/hisilicon: not in enabled drivers build config 00:01:49.211 dma/idxd: not in enabled drivers build config 00:01:49.211 dma/ioat: not in enabled drivers build config 00:01:49.211 dma/skeleton: not in enabled drivers build config 00:01:49.211 net/af_packet: not in enabled drivers build config 00:01:49.211 net/af_xdp: not in enabled drivers build config 00:01:49.211 net/ark: not in enabled drivers build config 00:01:49.211 net/atlantic: not in enabled drivers build config 00:01:49.211 net/avp: not in enabled drivers build config 00:01:49.211 net/axgbe: not in enabled drivers build config 00:01:49.211 net/bnx2x: not in enabled drivers build config 00:01:49.211 net/bnxt: not in 
enabled drivers build config 00:01:49.211 net/bonding: not in enabled drivers build config 00:01:49.211 net/cnxk: not in enabled drivers build config 00:01:49.211 net/cxgbe: not in enabled drivers build config 00:01:49.211 net/dpaa: not in enabled drivers build config 00:01:49.211 net/dpaa2: not in enabled drivers build config 00:01:49.211 net/e1000: not in enabled drivers build config 00:01:49.211 net/ena: not in enabled drivers build config 00:01:49.211 net/enetc: not in enabled drivers build config 00:01:49.211 net/enetfec: not in enabled drivers build config 00:01:49.211 net/enic: not in enabled drivers build config 00:01:49.211 net/failsafe: not in enabled drivers build config 00:01:49.211 net/fm10k: not in enabled drivers build config 00:01:49.211 net/gve: not in enabled drivers build config 00:01:49.211 net/hinic: not in enabled drivers build config 00:01:49.211 net/hns3: not in enabled drivers build config 00:01:49.211 net/iavf: not in enabled drivers build config 00:01:49.211 net/ice: not in enabled drivers build config 00:01:49.211 net/idpf: not in enabled drivers build config 00:01:49.211 net/igc: not in enabled drivers build config 00:01:49.211 net/ionic: not in enabled drivers build config 00:01:49.211 net/ipn3ke: not in enabled drivers build config 00:01:49.211 net/ixgbe: not in enabled drivers build config 00:01:49.211 net/kni: not in enabled drivers build config 00:01:49.211 net/liquidio: not in enabled drivers build config 00:01:49.211 net/mana: not in enabled drivers build config 00:01:49.211 net/memif: not in enabled drivers build config 00:01:49.211 net/mlx4: not in enabled drivers build config 00:01:49.211 net/mlx5: not in enabled drivers build config 00:01:49.211 net/mvneta: not in enabled drivers build config 00:01:49.211 net/mvpp2: not in enabled drivers build config 00:01:49.211 net/netvsc: not in enabled drivers build config 00:01:49.211 net/nfb: not in enabled drivers build config 00:01:49.211 net/nfp: not in enabled drivers build config 00:01:49.211 net/ngbe: not in enabled drivers build config 00:01:49.211 net/null: not in enabled drivers build config 00:01:49.211 net/octeontx: not in enabled drivers build config 00:01:49.211 net/octeon_ep: not in enabled drivers build config 00:01:49.211 net/pcap: not in enabled drivers build config 00:01:49.211 net/pfe: not in enabled drivers build config 00:01:49.211 net/qede: not in enabled drivers build config 00:01:49.211 net/ring: not in enabled drivers build config 00:01:49.211 net/sfc: not in enabled drivers build config 00:01:49.211 net/softnic: not in enabled drivers build config 00:01:49.211 net/tap: not in enabled drivers build config 00:01:49.211 net/thunderx: not in enabled drivers build config 00:01:49.211 net/txgbe: not in enabled drivers build config 00:01:49.211 net/vdev_netvsc: not in enabled drivers build config 00:01:49.211 net/vhost: not in enabled drivers build config 00:01:49.211 net/virtio: not in enabled drivers build config 00:01:49.211 net/vmxnet3: not in enabled drivers build config 00:01:49.211 raw/cnxk_bphy: not in enabled drivers build config 00:01:49.211 raw/cnxk_gpio: not in enabled drivers build config 00:01:49.211 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:49.211 raw/ifpga: not in enabled drivers build config 00:01:49.211 raw/ntb: not in enabled drivers build config 00:01:49.211 raw/skeleton: not in enabled drivers build config 00:01:49.211 crypto/armv8: not in enabled drivers build config 00:01:49.211 crypto/bcmfs: not in enabled drivers build config 00:01:49.211 
crypto/caam_jr: not in enabled drivers build config 00:01:49.211 crypto/ccp: not in enabled drivers build config 00:01:49.211 crypto/cnxk: not in enabled drivers build config 00:01:49.211 crypto/dpaa_sec: not in enabled drivers build config 00:01:49.211 crypto/dpaa2_sec: not in enabled drivers build config 00:01:49.211 crypto/ipsec_mb: not in enabled drivers build config 00:01:49.211 crypto/mlx5: not in enabled drivers build config 00:01:49.211 crypto/mvsam: not in enabled drivers build config 00:01:49.211 crypto/nitrox: not in enabled drivers build config 00:01:49.211 crypto/null: not in enabled drivers build config 00:01:49.211 crypto/octeontx: not in enabled drivers build config 00:01:49.211 crypto/openssl: not in enabled drivers build config 00:01:49.211 crypto/scheduler: not in enabled drivers build config 00:01:49.211 crypto/uadk: not in enabled drivers build config 00:01:49.211 crypto/virtio: not in enabled drivers build config 00:01:49.211 compress/isal: not in enabled drivers build config 00:01:49.211 compress/mlx5: not in enabled drivers build config 00:01:49.211 compress/octeontx: not in enabled drivers build config 00:01:49.211 compress/zlib: not in enabled drivers build config 00:01:49.211 regex/mlx5: not in enabled drivers build config 00:01:49.211 regex/cn9k: not in enabled drivers build config 00:01:49.211 vdpa/ifc: not in enabled drivers build config 00:01:49.211 vdpa/mlx5: not in enabled drivers build config 00:01:49.211 vdpa/sfc: not in enabled drivers build config 00:01:49.211 event/cnxk: not in enabled drivers build config 00:01:49.211 event/dlb2: not in enabled drivers build config 00:01:49.211 event/dpaa: not in enabled drivers build config 00:01:49.211 event/dpaa2: not in enabled drivers build config 00:01:49.211 event/dsw: not in enabled drivers build config 00:01:49.211 event/opdl: not in enabled drivers build config 00:01:49.211 event/skeleton: not in enabled drivers build config 00:01:49.211 event/sw: not in enabled drivers build config 00:01:49.211 event/octeontx: not in enabled drivers build config 00:01:49.211 baseband/acc: not in enabled drivers build config 00:01:49.211 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:49.211 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:49.211 baseband/la12xx: not in enabled drivers build config 00:01:49.211 baseband/null: not in enabled drivers build config 00:01:49.211 baseband/turbo_sw: not in enabled drivers build config 00:01:49.211 gpu/cuda: not in enabled drivers build config 00:01:49.211 00:01:49.211 00:01:49.211 Build targets in project: 316 00:01:49.211 00:01:49.211 DPDK 22.11.4 00:01:49.211 00:01:49.211 User defined options 00:01:49.211 libdir : lib 00:01:49.211 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:49.211 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:49.211 c_link_args : 00:01:49.211 enable_docs : false 00:01:49.211 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:49.211 enable_kmods : false 00:01:49.211 machine : native 00:01:49.211 tests : false 00:01:49.211 00:01:49.211 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.211 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
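Stripped of workspace paths, the external-DPDK configure that the summary above closes out and the ninja run below executes is a standard two-step meson flow; a condensed sketch of the traced invocation, using the non-deprecated 'meson setup' spelling the warning above asks for (-Dmachine=native is what triggers the config/meson.build:83 deprecation notice):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk

    # Configure into build-tmp; the install prefix under dpdk/build is what
    # SPDK's --with-dpdk=.../dpdk/build consumes. Only the drivers SPDK needs
    # are enabled; everything else lands in the "Content Skipped" list above.
    meson setup build-tmp --prefix="$PWD/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base

    # Compile; -j48 matches this runner's core count.
    ninja -C build-tmp -j48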
00:01:49.480 22:31:41 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:49.480 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:49.480 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:49.480 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:49.480 [3/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:49.480 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:49.480 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:49.480 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:49.480 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:49.480 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:49.480 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:49.480 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:49.739 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:49.739 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:49.739 [13/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:49.739 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:49.739 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:49.739 [16/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:49.739 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:49.739 [18/745] Linking static target lib/librte_kvargs.a 00:01:49.739 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:49.739 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:49.739 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:49.739 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:49.739 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:49.739 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:49.739 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:49.739 [26/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:49.739 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:49.739 [28/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:49.739 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:49.739 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:49.739 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:49.739 [32/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:49.739 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:49.739 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:49.739 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:49.739 [36/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:49.739 [37/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:49.739 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:49.739 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:49.739 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:49.739 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:49.739 [42/745] Generating lib/rte_eal_mingw with a custom command 00:01:49.739 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:49.739 [44/745] Generating lib/rte_eal_def with a custom command 00:01:49.739 [45/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:49.739 [46/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:49.739 [47/745] Generating lib/rte_ring_def with a custom command 00:01:49.739 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:49.739 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:49.739 [50/745] Generating lib/rte_ring_mingw with a custom command 00:01:50.001 [51/745] Generating lib/rte_rcu_mingw with a custom command 00:01:50.001 [52/745] Generating lib/rte_rcu_def with a custom command 00:01:50.001 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:50.001 [54/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:50.001 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:50.001 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:50.001 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:50.001 [58/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:50.001 [59/745] Generating lib/rte_mempool_def with a custom command 00:01:50.001 [60/745] Generating lib/rte_mempool_mingw with a custom command 00:01:50.001 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:50.001 [62/745] Generating lib/rte_mbuf_def with a custom command 00:01:50.001 [63/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:50.001 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.001 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:50.001 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:50.001 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:50.001 [68/745] Generating lib/rte_net_def with a custom command 00:01:50.001 [69/745] Generating lib/rte_net_mingw with a custom command 00:01:50.001 [70/745] Generating lib/rte_meter_def with a custom command 00:01:50.001 [71/745] Generating lib/rte_meter_mingw with a custom command 00:01:50.001 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:50.001 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:50.001 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:50.001 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:50.001 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:50.001 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:50.001 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:50.264 [79/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:50.264 [80/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:50.264 [81/745] Linking static target lib/librte_ring.a 00:01:50.264 [82/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:50.264 [83/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:50.264 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:50.264 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:50.264 [86/745] Generating lib/rte_pci_def with a custom command 00:01:50.264 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:50.264 [88/745] Linking static target lib/librte_meter.a 00:01:50.264 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:50.264 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:50.264 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:50.264 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.264 [93/745] Linking static target lib/librte_pci.a 00:01:50.264 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:50.529 [95/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:50.529 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:50.529 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:50.529 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.529 [99/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.529 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:50.529 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.529 [102/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.529 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:50.529 [104/745] Generating lib/rte_cmdline_def with a custom command 00:01:50.529 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:50.529 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:50.529 [107/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.796 [108/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:50.796 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:50.796 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:50.796 [111/745] Linking static target lib/librte_telemetry.a 00:01:50.796 [112/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:50.796 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.796 [114/745] Generating lib/rte_metrics_mingw with a custom command 00:01:50.796 [115/745] Generating lib/rte_metrics_def with a custom command 00:01:50.796 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.796 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:50.796 [118/745] Generating lib/rte_hash_def with a custom command 00:01:50.796 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:50.796 [120/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:50.796 [121/745] Generating lib/rte_timer_def with a custom command 00:01:50.796 [122/745] Generating lib/rte_timer_mingw with a 
custom command 00:01:51.063 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.063 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:51.063 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.063 [126/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.063 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:51.063 [128/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.063 [129/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:51.063 [130/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.063 [131/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.063 [132/745] Generating lib/rte_acl_def with a custom command 00:01:51.064 [133/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.064 [134/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.064 [135/745] Generating lib/rte_acl_mingw with a custom command 00:01:51.064 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:51.064 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:51.064 [138/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:51.064 [139/745] Generating lib/rte_bitratestats_def with a custom command 00:01:51.064 [140/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.064 [141/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.064 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:51.322 [143/745] Linking target lib/librte_telemetry.so.23.0 00:01:51.322 [144/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.322 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:51.322 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.322 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.322 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:51.322 [149/745] Generating lib/rte_bpf_mingw with a custom command 00:01:51.322 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:51.322 [151/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.322 [152/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.322 [153/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.322 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.322 [155/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:51.322 [156/745] Generating lib/rte_cfgfile_def with a custom command 00:01:51.322 [157/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.584 [158/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:51.584 [159/745] Generating lib/rte_compressdev_def with a custom command 00:01:51.584 [160/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:51.584 [161/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:51.584 [162/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.584 [163/745] Generating lib/rte_cryptodev_def with a custom command 00:01:51.584 [164/745] Generating 
lib/rte_cryptodev_mingw with a custom command 00:01:51.584 [165/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.584 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:51.584 [167/745] Linking static target lib/librte_rcu.a 00:01:51.584 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.584 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.584 [170/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.584 [171/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.584 [172/745] Linking static target lib/librte_cmdline.a 00:01:51.584 [173/745] Linking static target lib/librte_timer.a 00:01:51.584 [174/745] Generating lib/rte_distributor_def with a custom command 00:01:51.584 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:51.584 [176/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:51.584 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:51.584 [178/745] Linking static target lib/librte_net.a 00:01:51.847 [179/745] Generating lib/rte_efd_mingw with a custom command 00:01:51.848 [180/745] Generating lib/rte_efd_def with a custom command 00:01:51.848 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:51.848 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.848 [183/745] Linking static target lib/librte_mempool.a 00:01:51.848 [184/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:51.848 [185/745] Linking static target lib/librte_cfgfile.a 00:01:51.848 [186/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:51.848 [187/745] Linking static target lib/librte_metrics.a 00:01:52.110 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.110 [189/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.110 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:52.110 [191/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.110 [192/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.110 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.373 [194/745] Generating lib/rte_eventdev_def with a custom command 00:01:52.373 [195/745] Linking static target lib/librte_eal.a 00:01:52.373 [196/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:52.373 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:52.373 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:52.373 [199/745] Generating lib/rte_gpudev_def with a custom command 00:01:52.373 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:52.373 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:52.373 [202/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:52.373 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:52.373 [204/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.373 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:52.373 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:52.373 [207/745] Linking static target lib/librte_bitratestats.a 
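The [n/745] progress markers above are ninja's own edge counter, and the same build tree can be queried without compiling anything. A sketch, assuming the build-tmp directory used throughout this job; the grep pattern is illustrative and matches the static libraries whose "Linking static target" entries appear in this log.

  # Read-only query: list every target ninja knows about in this tree
  # (output format is "path: rule"), then narrow to the librte_*.a
  # static archives.
  ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -t targets all \
    | grep 'librte_.*\.a:'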
00:01:52.373 [208/745] Generating lib/rte_gro_def with a custom command 00:01:52.638 [209/745] Generating lib/rte_gro_mingw with a custom command 00:01:52.638 [210/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.638 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:52.638 [212/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.638 [213/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:52.638 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.897 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:52.897 [216/745] Generating lib/rte_gso_def with a custom command 00:01:52.897 [217/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.898 [218/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:52.898 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:52.898 [220/745] Generating lib/rte_gso_mingw with a custom command 00:01:52.898 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:52.898 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.898 [223/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.161 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:53.161 [225/745] Linking static target lib/librte_bbdev.a 00:01:53.161 [226/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:53.161 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:53.161 [228/745] Generating lib/rte_ip_frag_def with a custom command 00:01:53.161 [229/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.161 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.161 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:53.161 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:53.161 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:53.161 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:53.161 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:53.161 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.161 [237/745] Generating lib/rte_lpm_def with a custom command 00:01:53.161 [238/745] Linking static target lib/librte_compressdev.a 00:01:53.161 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:53.161 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:53.423 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:53.423 [242/745] Linking static target lib/librte_jobstats.a 00:01:53.423 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:53.687 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.687 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:53.687 [246/745] Linking static target lib/librte_distributor.a 00:01:53.687 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:53.687 [248/745] Generating lib/rte_member_def with a 
custom command 00:01:53.687 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:53.687 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:53.687 [251/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.687 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:53.953 [253/745] Generating lib/rte_pcapng_def with a custom command 00:01:53.953 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:53.953 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:53.953 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:53.953 [257/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.953 [258/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:53.953 [259/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:53.953 [260/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:53.953 [261/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:53.953 [262/745] Linking static target lib/librte_bpf.a 00:01:53.953 [263/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:53.953 [264/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:53.953 [265/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.953 [266/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:53.953 [267/745] Linking static target lib/librte_gpudev.a 00:01:53.953 [268/745] Generating lib/rte_power_def with a custom command 00:01:53.953 [269/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:53.953 [270/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:54.218 [271/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:54.218 [272/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:54.218 [273/745] Generating lib/rte_power_mingw with a custom command 00:01:54.218 [274/745] Linking static target lib/librte_gro.a 00:01:54.218 [275/745] Generating lib/rte_rawdev_def with a custom command 00:01:54.218 [276/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:54.218 [277/745] Generating lib/rte_regexdev_def with a custom command 00:01:54.218 [278/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:54.218 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:54.218 [280/745] Generating lib/rte_dmadev_def with a custom command 00:01:54.218 [281/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:54.218 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:54.218 [283/745] Generating lib/rte_rib_def with a custom command 00:01:54.218 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:54.218 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:54.478 [286/745] Generating lib/rte_reorder_mingw with a custom command 00:01:54.478 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:54.478 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:54.479 [289/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.479 [290/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.479 [291/745] Generating lib/rte_sched_def 
with a custom command 00:01:54.479 [292/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:54.479 [293/745] Generating lib/rte_sched_mingw with a custom command 00:01:54.479 [294/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:54.741 [295/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.741 [296/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:54.741 [297/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:54.741 [298/745] Generating lib/rte_security_def with a custom command 00:01:54.741 [299/745] Generating lib/rte_security_mingw with a custom command 00:01:54.741 [300/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:54.741 [301/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:54.741 [302/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:54.741 [303/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:54.741 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:54.741 [305/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:54.741 [306/745] Linking static target lib/librte_latencystats.a 00:01:54.741 [307/745] Generating lib/rte_stack_def with a custom command 00:01:54.741 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:54.741 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:54.741 [310/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:54.741 [311/745] Linking static target lib/librte_rawdev.a 00:01:54.741 [312/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:54.741 [313/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:54.741 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:54.741 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:54.741 [316/745] Linking static target lib/librte_stack.a 00:01:54.741 [317/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:54.741 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:55.004 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:55.004 [320/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:55.004 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.004 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.004 [323/745] Linking static target lib/librte_dmadev.a 00:01:55.004 [324/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:55.004 [325/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.004 [326/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:55.004 [327/745] Linking static target lib/librte_ip_frag.a 00:01:55.004 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:55.269 [329/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:55.269 [330/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:55.269 [331/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.269 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:55.269 [333/745] 
Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.269 [334/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:55.538 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.538 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:55.538 [337/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.538 [338/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.538 [339/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.538 [340/745] Generating lib/rte_fib_def with a custom command 00:01:55.538 [341/745] Generating lib/rte_fib_mingw with a custom command 00:01:55.538 [342/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:55.538 [343/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:55.538 [344/745] Linking static target lib/librte_regexdev.a 00:01:55.538 [345/745] Linking static target lib/librte_gso.a 00:01:55.798 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.798 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:55.798 [348/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:55.798 [349/745] Linking static target lib/librte_efd.a 00:01:56.059 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.059 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:56.059 [352/745] Linking static target lib/librte_pcapng.a 00:01:56.059 [353/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:56.059 [354/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:56.059 [355/745] Linking static target lib/librte_lpm.a 00:01:56.059 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:56.059 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.326 [358/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.326 [359/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.326 [360/745] Linking static target lib/librte_reorder.a 00:01:56.326 [361/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.326 [362/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:56.326 [363/745] Generating lib/rte_port_def with a custom command 00:01:56.326 [364/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:56.326 [365/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:56.326 [366/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:56.326 [367/745] Linking static target lib/acl/libavx2_tmp.a 00:01:56.326 [368/745] Generating lib/rte_port_mingw with a custom command 00:01:56.587 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:56.587 [370/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:56.587 [371/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.587 [372/745] Generating lib/rte_pdump_def with a custom command 00:01:56.587 [373/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:56.587 [374/745] Generating lib/rte_pdump_mingw with a custom command 00:01:56.587 [375/745] Linking 
static target lib/fib/libdir24_8_avx512_tmp.a 00:01:56.587 [376/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:56.587 [377/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:56.587 [378/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.587 [379/745] Linking static target lib/librte_security.a 00:01:56.587 [380/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:56.587 [381/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.587 [382/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:56.855 [383/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.855 [384/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.855 [385/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.855 [386/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:56.855 [387/745] Linking static target lib/librte_hash.a 00:01:56.855 [388/745] Linking static target lib/librte_power.a 00:01:56.855 [389/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:56.855 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:56.855 [391/745] Linking static target lib/librte_rib.a 00:01:57.116 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:57.116 [393/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:57.116 [394/745] Linking static target lib/acl/libavx512_tmp.a 00:01:57.116 [395/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:57.116 [396/745] Linking static target lib/librte_acl.a 00:01:57.116 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:57.116 [398/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.116 [399/745] Generating lib/rte_table_def with a custom command 00:01:57.378 [400/745] Generating lib/rte_table_mingw with a custom command 00:01:57.378 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.639 [402/745] Linking static target lib/librte_ethdev.a 00:01:57.639 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:57.639 [404/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.639 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.639 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:57.639 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:57.639 [408/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:57.905 [409/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:57.905 [410/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:57.905 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:57.905 [412/745] Generating lib/rte_pipeline_def with a custom command 00:01:57.905 [413/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:57.905 [414/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.905 [415/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:57.905 [416/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 
00:01:57.905 [417/745] Linking static target lib/librte_mbuf.a 00:01:57.905 [418/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:57.905 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:57.905 [420/745] Generating lib/rte_graph_def with a custom command 00:01:57.905 [421/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:57.905 [422/745] Generating lib/rte_graph_mingw with a custom command 00:01:57.905 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:57.905 [424/745] Linking static target lib/librte_fib.a 00:01:58.174 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:58.174 [426/745] Linking static target lib/librte_member.a 00:01:58.174 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:58.174 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:58.174 [429/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:58.174 [430/745] Linking static target lib/librte_eventdev.a 00:01:58.174 [431/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:58.174 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:58.174 [433/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.174 [434/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:58.174 [435/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:58.438 [436/745] Generating lib/rte_node_def with a custom command 00:01:58.438 [437/745] Generating lib/rte_node_mingw with a custom command 00:01:58.438 [438/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:58.438 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:58.438 [440/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:58.438 [441/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.438 [442/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:58.705 [443/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.705 [444/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:58.705 [445/745] Linking static target lib/librte_sched.a 00:01:58.705 [446/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:58.705 [447/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:58.705 [448/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.705 [449/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:58.705 [450/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.705 [451/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.705 [452/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:58.705 [453/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:58.705 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:58.705 [455/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:58.705 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:58.705 [457/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:58.967 
[458/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.967 [459/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:58.967 [460/745] Linking static target lib/librte_cryptodev.a 00:01:58.967 [461/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:58.967 [462/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:58.967 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.967 [464/745] Linking static target lib/librte_pdump.a 00:01:58.967 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.967 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:58.967 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:58.967 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:59.225 [469/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:59.225 [470/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:59.225 [471/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:59.225 [472/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:59.225 [473/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.225 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:59.225 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.225 [476/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:59.225 [477/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:59.225 [478/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.225 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:59.486 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:59.486 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:59.486 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:59.486 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.486 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.486 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.486 [486/745] Linking static target drivers/librte_bus_vdev.a 00:01:59.749 [487/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:59.749 [488/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:59.749 [489/745] Linking static target lib/librte_table.a 00:01:59.749 [490/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.749 [491/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:59.749 [492/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:59.749 [493/745] Linking static target lib/librte_ipsec.a 00:01:59.749 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.024 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.024 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:00.024 [497/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:00.024 [498/745] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:00.287 [499/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:00.287 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:00.287 [501/745] Linking static target lib/librte_graph.a 00:02:00.287 [502/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:00.287 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:00.287 [504/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:00.287 [505/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.287 [506/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:00.287 [507/745] Linking static target drivers/librte_bus_pci.a 00:02:00.287 [508/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.287 [509/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:00.287 [510/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:00.287 [511/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:00.550 [512/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.550 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:00.821 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.821 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:00.821 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.081 [517/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:01.081 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:01.081 [519/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.081 [520/745] Linking static target lib/librte_port.a 00:02:01.081 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:01.081 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:01.349 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:01.349 [524/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:01.349 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:01.349 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:01.608 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.608 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:01.608 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:01.608 [530/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.608 [531/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:01.608 [532/745] Linking static target drivers/librte_mempool_ring.a 00:02:01.869 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.869 [534/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:01.869 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 
00:02:01.869 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:01.869 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:02.134 [538/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.134 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:02.134 [540/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:02.134 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.399 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:02.399 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:02.679 [544/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:02.679 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:02.679 [546/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:02.679 [547/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:02.965 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:02.965 [549/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:02.965 [550/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:02.965 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:02.965 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:03.239 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:03.501 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:03.501 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:03.501 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:03.501 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:03.767 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:03.767 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:03.767 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:04.034 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:04.034 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:04.034 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:04.295 [564/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:04.295 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:04.295 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:04.295 [567/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:04.295 [568/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:04.295 [569/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:04.295 [570/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:04.295 [571/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:04.559 [572/745] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:04.559 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:04.559 [574/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:04.824 [575/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:04.824 [576/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.824 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:04.824 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:04.824 [579/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:04.824 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:04.824 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:04.824 [582/745] Linking target lib/librte_eal.so.23.0 00:02:04.824 [583/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:05.088 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:05.088 [585/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:05.088 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:05.088 [587/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.088 [588/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:05.347 [589/745] Linking target lib/librte_ring.so.23.0 00:02:05.347 [590/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:05.347 [591/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:05.611 [592/745] Linking target lib/librte_meter.so.23.0 00:02:05.611 [593/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:05.611 [594/745] Linking target lib/librte_pci.so.23.0 00:02:05.611 [595/745] Linking target lib/librte_rcu.so.23.0 00:02:05.611 [596/745] Linking target lib/librte_mempool.so.23.0 00:02:05.611 [597/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:05.611 [598/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:05.874 [599/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:05.874 [600/745] Linking target lib/librte_timer.so.23.0 00:02:05.874 [601/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:05.874 [602/745] Linking target lib/librte_acl.so.23.0 00:02:05.874 [603/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:05.874 [604/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:05.874 [605/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:05.874 [606/745] Linking target lib/librte_cfgfile.so.23.0 00:02:05.874 [607/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:05.874 [608/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:05.874 [609/745] Linking target lib/librte_jobstats.so.23.0 00:02:05.874 [610/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:05.874 [611/745] Linking target lib/librte_mbuf.so.23.0 
00:02:05.874 [612/745] Linking target lib/librte_rawdev.so.23.0 00:02:05.874 [613/745] Linking target lib/librte_dmadev.so.23.0 00:02:05.874 [614/745] Linking target lib/librte_rib.so.23.0 00:02:05.874 [615/745] Linking target lib/librte_stack.so.23.0 00:02:05.874 [616/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:06.137 [617/745] Linking target lib/librte_graph.so.23.0 00:02:06.137 [618/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:06.137 [619/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:06.137 [620/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:06.137 [621/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:06.137 [622/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:06.137 [623/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:06.137 [624/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:06.137 [625/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:06.137 [626/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:06.137 [627/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:06.137 [628/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:06.137 [629/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:06.137 [630/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:06.137 [631/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:06.137 [632/745] Linking target lib/librte_gpudev.so.23.0 00:02:06.137 [633/745] Linking target lib/librte_compressdev.so.23.0 00:02:06.395 [634/745] Linking target lib/librte_reorder.so.23.0 00:02:06.395 [635/745] Linking target lib/librte_cryptodev.so.23.0 00:02:06.395 [636/745] Linking target lib/librte_distributor.so.23.0 00:02:06.395 [637/745] Linking target lib/librte_bbdev.so.23.0 00:02:06.395 [638/745] Linking target lib/librte_sched.so.23.0 00:02:06.395 [639/745] Linking target lib/librte_net.so.23.0 00:02:06.395 [640/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:06.395 [641/745] Linking target lib/librte_regexdev.so.23.0 00:02:06.395 [642/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:06.395 [643/745] Linking target lib/librte_fib.so.23.0 00:02:06.395 [644/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:06.395 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:06.395 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:06.395 [647/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:06.395 [648/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:06.395 [649/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:06.395 [650/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:06.395 [651/745] Linking target lib/librte_security.so.23.0 00:02:06.395 [652/745] Linking target lib/librte_cmdline.so.23.0 00:02:06.395 [653/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:06.395 [654/745] Linking target lib/librte_hash.so.23.0 00:02:06.654 [655/745] Linking target lib/librte_ethdev.so.23.0 00:02:06.654 
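Each "Linking target lib/librte_*.so.23.0" entry above produces a fully versioned shared object, while the "Generating symbol file" steps feed DPDK's exported-symbol check. A sketch of inspecting one result by hand, assuming this job's build tree layout; the expected SONAME shown in the comment follows DPDK's ABI-major scheme and is not copied from this log.

  # Print the dynamic section of one freshly linked library; the SONAME
  # field should carry the ABI major version (librte_mbuf.so.23), while
  # the file on disk is the fully versioned librte_mbuf.so.23.0.
  readelf -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/lib/librte_mbuf.so.23.0 \
    | grep SONAME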
[656/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:06.654 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:06.654 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:06.654 [659/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:06.654 [660/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:06.654 [661/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:06.654 [662/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:06.654 [663/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:06.654 [664/745] Linking target lib/librte_bpf.so.23.0 00:02:06.654 [665/745] Linking target lib/librte_lpm.so.23.0 00:02:06.654 [666/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:06.654 [667/745] Linking target lib/librte_efd.so.23.0 00:02:06.654 [668/745] Linking target lib/librte_member.so.23.0 00:02:06.654 [669/745] Linking target lib/librte_pcapng.so.23.0 00:02:06.654 [670/745] Linking target lib/librte_eventdev.so.23.0 00:02:06.654 [671/745] Linking target lib/librte_gso.so.23.0 00:02:06.654 [672/745] Linking target lib/librte_ipsec.so.23.0 00:02:06.654 [673/745] Linking target lib/librte_metrics.so.23.0 00:02:06.654 [674/745] Linking target lib/librte_gro.so.23.0 00:02:06.654 [675/745] Linking target lib/librte_ip_frag.so.23.0 00:02:06.654 [676/745] Linking target lib/librte_power.so.23.0 00:02:06.914 [677/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:06.914 [678/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:06.914 [679/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:06.914 [680/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:06.914 [681/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:06.914 [682/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:06.914 [683/745] Linking target lib/librte_port.so.23.0 00:02:06.914 [684/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:06.914 [685/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:06.914 [686/745] Linking target lib/librte_pdump.so.23.0 00:02:06.914 [687/745] Linking target lib/librte_latencystats.so.23.0 00:02:06.914 [688/745] Linking target lib/librte_bitratestats.so.23.0 00:02:07.173 [689/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:07.173 [690/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:07.173 [691/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:07.173 [692/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:07.173 [693/745] Linking target lib/librte_table.so.23.0 00:02:07.432 [694/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:07.690 [695/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:07.690 [696/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:07.948 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:07.948 [698/745] Compiling C 
object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:07.948 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:07.948 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:07.948 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:08.206 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:08.206 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:08.464 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:08.464 [705/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:08.464 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:08.464 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:08.464 [708/745] Linking static target drivers/librte_net_i40e.a 00:02:08.721 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:08.979 [710/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.979 [711/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:08.979 [712/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:09.911 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:09.911 [714/745] Linking static target lib/librte_node.a 00:02:10.169 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.169 [716/745] Linking target lib/librte_node.so.23.0 00:02:10.735 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:10.992 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:11.926 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:21.899 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:48.481 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:48.481 [722/745] Linking static target lib/librte_vhost.a 00:02:49.852 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.852 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:11.783 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:11.783 [726/745] Linking static target lib/librte_pipeline.a 00:03:11.783 [727/745] Linking target app/dpdk-dumpcap 00:03:11.783 [728/745] Linking target app/dpdk-proc-info 00:03:11.783 [729/745] Linking target app/dpdk-test-regex 00:03:11.783 [730/745] Linking target app/dpdk-test-flow-perf 00:03:11.783 [731/745] Linking target app/dpdk-test-pipeline 00:03:11.783 [732/745] Linking target app/dpdk-test-security-perf 00:03:11.783 [733/745] Linking target app/dpdk-test-bbdev 00:03:11.783 [734/745] Linking target app/dpdk-test-eventdev 00:03:11.783 [735/745] Linking target app/dpdk-test-cmdline 00:03:11.783 [736/745] Linking target app/dpdk-pdump 00:03:11.783 [737/745] Linking target app/dpdk-test-sad 00:03:11.783 [738/745] Linking target app/dpdk-test-fib 00:03:11.783 [739/745] Linking target app/dpdk-test-acl 00:03:11.783 [740/745] Linking target app/dpdk-test-gpudev 00:03:11.783 [741/745] Linking target app/dpdk-test-compress-perf 00:03:11.783 [742/745] Linking target app/dpdk-test-crypto-perf 00:03:11.783 [743/745] Linking target app/dpdk-testpmd 00:03:12.042 [744/745] 
Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.042 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:12.042 22:33:04 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:12.300 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:12.300 [0/1] Installing files. 00:03:12.564 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.564 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.565 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:12.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.566 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.566 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:12.566 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.566 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:12.567 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.568 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.568 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:12.569 
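Everything to this point in the install step is `ninja install` staging the DPDK example sources (Makefiles, .c/.h files, .cli/.spec configs) under the prefix's share/dpdk/examples tree, so each example can later be rebuilt standalone against the installed SDK. A minimal sketch of doing that for one of the examples staged above, assuming meson also installed libdpdk.pc under build/lib/pkgconfig (the .pc files are not visible in this excerpt):

  # Point pkg-config at this non-system prefix, then use the staged Makefile,
  # which resolves DPDK through `pkg-config libdpdk`.
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  make -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly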
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:12.569 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:12.569 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_eal.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
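Here the run has moved on from example sources to the libraries themselves: each librte_* component is installed twice, once as a static archive (.a) and once as a shared object versioned .so.23.0, matching DPDK 22.11's ABI level 23 (the same number appears below in the dpdk/pmds-23.0 plugin directory for driver .so files). A quick sanity-check sketch over one pair installed just above; readelf being available on the build host is an assumption, and the SONAME librte_eal.so.23 is inferred from the .so.23.0 naming:

  ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.*
  # Expect the dynamic section to report: SONAME librte_eal.so.23
  readelf -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23.0 | grep SONAME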
00:03:12.570 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 
Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.570 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing lib/librte_graph.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.142 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.142 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.142 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.142 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.142 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:13.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:13.146 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:13.146 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:13.146 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:13.146 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:13.146 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:13.146 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:13.146 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:13.146 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:13.146 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:13.146 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:13.146 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:13.146 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:13.146 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:13.146 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:13.146 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:13.146 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:13.146 Installing symlink pointing to librte_meter.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:13.146 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:13.146 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:13.146 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:13.146 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:13.146 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:13.146 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:13.146 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:13.146 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:13.146 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:13.146 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:13.146 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:13.146 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:13.146 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:13.146 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:13.146 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:13.146 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:13.146 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:13.146 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:13.146 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:13.146 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:13.146 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:13.146 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:13.147 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:13.147 Installing symlink pointing to librte_compressdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:13.147 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:13.147 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:13.147 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:13.147 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:13.147 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:13.147 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:13.147 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:13.147 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:13.147 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:13.147 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:13.147 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:13.147 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:13.147 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:13.147 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:13.147 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:13.147 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:13.147 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:13.147 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:13.147 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:13.147 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:13.147 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:13.147 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:13.147 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:13.147 Installing symlink pointing to 
librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:13.147 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:13.147 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:13.147 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:13.147 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:13.147 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:13.147 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:13.147 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:13.147 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:13.147 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:13.147 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:13.147 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:13.147 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:13.147 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:13.147 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:13.147 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:13.147 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:13.147 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:13.147 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:13.147 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:13.147 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:13.147 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:13.147 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:13.147 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:13.147 Installing symlink pointing to librte_ipsec.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:13.147 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:13.147 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:13.147 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:13.147 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:13.147 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:13.147 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:13.147 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:13.147 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:13.147 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:13.147 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:13.147 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:13.147 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:13.147 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:13.147 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:13.147 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:13.147 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:13.147 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:13.147 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:13.147 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:13.147 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:13.147 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:13.147 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:13.147 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:13.147 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:13.147 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:13.147 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:13.147 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 
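Everything above stages DPDK's public headers into a single build/include tree and its libraries into build/lib; the symlink entries then lay out the conventional ELF shared-library name chain. For each library the real file carries the full version (librte_foo.so.23.0), a soname link (librte_foo.so.23) points at it for the runtime loader, and an unversioned dev link (librte_foo.so) points at the soname link for the link editor. Driver PMDs additionally live in a dpdk/pmds-23.0 plugin subdirectory, with relative compatibility links at the top of lib/, which is what the symlink-drivers-solibs.sh install script logged just below maintains. A minimal sketch of the same layout, using a hypothetical librte_demo library and a scratch staging prefix (both names are illustrative, not from this build):

  # $PREFIX stands in for dpdk/build; librte_demo stands in for any versioned PMD
  PREFIX=/tmp/dpdk-stage
  mkdir -p "$PREFIX/lib/dpdk/pmds-23.0"
  cd "$PREFIX/lib/dpdk/pmds-23.0"
  touch librte_demo.so.23.0                     # the real DSO
  ln -sf librte_demo.so.23.0 librte_demo.so.23  # soname link, used by the runtime loader
  ln -sf librte_demo.so.23 librte_demo.so       # dev link, used by 'ld -lrte_demo'
  cd "$PREFIX/lib"
  for l in librte_demo.so librte_demo.so.23 librte_demo.so.23.0; do
    ln -sf "dpdk/pmds-23.0/$l" "$l"             # top-level compatibility links into the plugin dir
  done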
00:03:13.147 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:13.147 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:13.147 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:13.147 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:13.147 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:13.147 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:13.147 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:13.147 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:13.147 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:13.147 22:33:05 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:03:13.147 22:33:05 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:13.147 22:33:05 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:03:13.147 22:33:05 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:13.147 00:03:13.148 real 1m29.087s 00:03:13.148 user 14m30.900s 00:03:13.148 sys 1m47.835s 00:03:13.148 22:33:05 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:13.148 22:33:05 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:13.148 ************************************ 00:03:13.148 END TEST build_native_dpdk 00:03:13.148 ************************************ 00:03:13.148 22:33:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:13.148 22:33:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:13.148 22:33:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:13.148 22:33:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:13.148 22:33:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:13.148 22:33:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:13.148 22:33:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:13.148 22:33:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:13.406 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:13.406 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.406 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.406 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:13.664 Using 'verbs' RDMA provider 00:03:24.266 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:32.381 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 
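The configure invocation just above points SPDK at the freshly staged DPDK with --with-dpdk=.../dpdk/build, and the "Using .../dpdk/build/lib/pkgconfig for additional libs" line shows it resolving compile and link flags through the libdpdk.pc file installed earlier. The same resolution can be reproduced by hand with standard pkg-config; the commands below are ordinary pkg-config usage, and the expected outputs are inferred from this log rather than copied from it:

  # point pkg-config at the .pc files staged by the DPDK install above
  DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig"
  pkg-config --modversion libdpdk  # should report the 22.11.4 stable tag built in this run
  pkg-config --cflags libdpdk      # yields -I$DPDK_BUILD/include, matching "DPDK includes" above
  pkg-config --libs libdpdk        # yields -L$DPDK_BUILD/lib plus the -lrte_* library set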
00:03:32.640 Creating mk/config.mk...done. 00:03:32.640 Creating mk/cc.flags.mk...done. 00:03:32.640 Type 'make' to build. 00:03:32.640 22:33:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:32.640 22:33:25 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:32.640 22:33:25 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:32.640 22:33:25 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.640 ************************************ 00:03:32.640 START TEST make 00:03:32.640 ************************************ 00:03:32.640 22:33:25 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:32.898 make[1]: Nothing to be done for 'all'. 00:03:34.828 The Meson build system 00:03:34.828 Version: 1.3.1 00:03:34.828 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:34.828 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:34.828 Build type: native build 00:03:34.828 Project name: libvfio-user 00:03:34.828 Project version: 0.0.1 00:03:34.828 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:34.828 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:34.828 Host machine cpu family: x86_64 00:03:34.828 Host machine cpu: x86_64 00:03:34.828 Run-time dependency threads found: YES 00:03:34.828 Library dl found: YES 00:03:34.828 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:34.828 Run-time dependency json-c found: YES 0.17 00:03:34.828 Run-time dependency cmocka found: YES 1.1.7 00:03:34.828 Program pytest-3 found: NO 00:03:34.828 Program flake8 found: NO 00:03:34.828 Program misspell-fixer found: NO 00:03:34.828 Program restructuredtext-lint found: NO 00:03:34.828 Program valgrind found: YES (/usr/bin/valgrind) 00:03:34.828 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:34.828 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:34.828 Compiler for C supports arguments -Wwrite-strings: YES 00:03:34.828 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:34.828 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:34.828 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:34.828 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:34.828 Build targets in project: 8 00:03:34.828 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:34.828 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:34.828 00:03:34.828 libvfio-user 0.0.1 00:03:34.828 00:03:34.828 User defined options 00:03:34.828 buildtype : debug 00:03:34.828 default_library: shared 00:03:34.828 libdir : /usr/local/lib 00:03:34.828 00:03:34.828 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:35.400 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:35.400 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:35.400 [2/37] Compiling C object samples/null.p/null.c.o 00:03:35.400 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:35.660 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:35.660 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:35.660 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:35.660 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:35.660 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:35.660 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:35.660 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:35.660 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:35.660 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:35.660 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:35.660 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:35.660 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:35.660 [16/37] Compiling C object samples/server.p/server.c.o 00:03:35.660 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:35.660 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:35.660 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:35.660 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:35.660 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:35.660 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:35.660 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:35.660 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:35.660 [25/37] Compiling C object samples/client.p/client.c.o 00:03:35.660 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:35.660 [27/37] Linking target samples/client 00:03:35.926 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:35.926 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:35.926 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:35.926 [31/37] Linking target test/unit_tests 00:03:36.189 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:36.189 [33/37] Linking target samples/server 00:03:36.189 [34/37] Linking target samples/null 00:03:36.189 [35/37] Linking target samples/lspci 00:03:36.189 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:36.189 [37/37] Linking target samples/gpio-pci-idio-16 00:03:36.189 INFO: autodetecting backend as ninja 00:03:36.189 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
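The libvfio-user submodule is configured and built out of tree: Meson 1.3.1 writes a debug, shared-library build directory under spdk/build/libvfio-user/build-debug, ninja runs the 37 compile and link steps listed above, and the install (next entry) is redirected with DESTDIR so nothing is written outside SPDK's build tree. A by-hand equivalent under the same paths follows; the meson flag spellings are standard Meson options, not copied from SPDK's scripts:

  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  STAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
  meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  ninja -C "$BUILD"
  # DESTDIR reroutes the /usr/local/lib libdir into the staging tree,
  # mirroring the DESTDIR pattern shown in the next log entry:
  DESTDIR="$STAGE" meson install --quiet -C "$BUILD"   # files land under $STAGE/usr/local/lib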
00:03:36.189 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:37.132 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:37.132 ninja: no work to do. 00:03:49.332 CC lib/ut/ut.o 00:03:49.332 CC lib/log/log.o 00:03:49.332 CC lib/log/log_flags.o 00:03:49.332 CC lib/log/log_deprecated.o 00:03:49.332 CC lib/ut_mock/mock.o 00:03:49.332 LIB libspdk_log.a 00:03:49.332 LIB libspdk_ut.a 00:03:49.332 LIB libspdk_ut_mock.a 00:03:49.332 SO libspdk_ut.so.2.0 00:03:49.332 SO libspdk_ut_mock.so.6.0 00:03:49.332 SO libspdk_log.so.7.0 00:03:49.332 SYMLINK libspdk_ut.so 00:03:49.332 SYMLINK libspdk_ut_mock.so 00:03:49.332 SYMLINK libspdk_log.so 00:03:49.332 CC lib/dma/dma.o 00:03:49.332 CXX lib/trace_parser/trace.o 00:03:49.332 CC lib/ioat/ioat.o 00:03:49.332 CC lib/util/base64.o 00:03:49.332 CC lib/util/bit_array.o 00:03:49.332 CC lib/util/cpuset.o 00:03:49.332 CC lib/util/crc16.o 00:03:49.332 CC lib/util/crc32.o 00:03:49.332 CC lib/util/crc32c.o 00:03:49.332 CC lib/util/crc32_ieee.o 00:03:49.332 CC lib/util/crc64.o 00:03:49.332 CC lib/util/dif.o 00:03:49.332 CC lib/util/fd.o 00:03:49.332 CC lib/util/file.o 00:03:49.332 CC lib/util/hexlify.o 00:03:49.332 CC lib/util/iov.o 00:03:49.332 CC lib/util/math.o 00:03:49.332 CC lib/util/pipe.o 00:03:49.332 CC lib/util/strerror_tls.o 00:03:49.332 CC lib/util/string.o 00:03:49.332 CC lib/util/uuid.o 00:03:49.332 CC lib/util/fd_group.o 00:03:49.332 CC lib/util/xor.o 00:03:49.332 CC lib/util/zipf.o 00:03:49.332 CC lib/vfio_user/host/vfio_user_pci.o 00:03:49.332 CC lib/vfio_user/host/vfio_user.o 00:03:49.332 LIB libspdk_dma.a 00:03:49.332 SO libspdk_dma.so.4.0 00:03:49.332 SYMLINK libspdk_dma.so 00:03:49.332 LIB libspdk_ioat.a 00:03:49.332 SO libspdk_ioat.so.7.0 00:03:49.332 LIB libspdk_vfio_user.a 00:03:49.332 SYMLINK libspdk_ioat.so 00:03:49.332 SO libspdk_vfio_user.so.5.0 00:03:49.332 SYMLINK libspdk_vfio_user.so 00:03:49.332 LIB libspdk_util.a 00:03:49.332 SO libspdk_util.so.9.0 00:03:49.332 SYMLINK libspdk_util.so 00:03:49.590 CC lib/vmd/vmd.o 00:03:49.590 CC lib/idxd/idxd.o 00:03:49.590 CC lib/json/json_parse.o 00:03:49.590 CC lib/rdma/common.o 00:03:49.590 CC lib/conf/conf.o 00:03:49.590 CC lib/env_dpdk/env.o 00:03:49.590 CC lib/idxd/idxd_user.o 00:03:49.590 CC lib/json/json_util.o 00:03:49.590 CC lib/vmd/led.o 00:03:49.590 CC lib/rdma/rdma_verbs.o 00:03:49.590 CC lib/idxd/idxd_kernel.o 00:03:49.590 CC lib/env_dpdk/memory.o 00:03:49.590 CC lib/json/json_write.o 00:03:49.590 CC lib/env_dpdk/pci.o 00:03:49.590 CC lib/env_dpdk/init.o 00:03:49.590 CC lib/env_dpdk/threads.o 00:03:49.590 CC lib/env_dpdk/pci_ioat.o 00:03:49.590 CC lib/env_dpdk/pci_virtio.o 00:03:49.590 CC lib/env_dpdk/pci_vmd.o 00:03:49.590 CC lib/env_dpdk/pci_idxd.o 00:03:49.590 CC lib/env_dpdk/pci_event.o 00:03:49.590 CC lib/env_dpdk/sigbus_handler.o 00:03:49.590 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:49.590 CC lib/env_dpdk/pci_dpdk.o 00:03:49.590 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:49.590 LIB libspdk_trace_parser.a 00:03:49.590 SO libspdk_trace_parser.so.5.0 00:03:49.590 SYMLINK libspdk_trace_parser.so 00:03:49.849 LIB libspdk_json.a 00:03:49.849 LIB libspdk_rdma.a 00:03:49.849 LIB libspdk_conf.a 00:03:49.849 SO libspdk_json.so.6.0 00:03:49.849 SO libspdk_rdma.so.6.0 00:03:49.849 SO libspdk_conf.so.6.0 00:03:49.849 SYMLINK libspdk_json.so 00:03:49.849 SYMLINK libspdk_rdma.so 00:03:49.849 SYMLINK 
libspdk_conf.so 00:03:50.107 CC lib/jsonrpc/jsonrpc_server.o 00:03:50.107 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:50.107 CC lib/jsonrpc/jsonrpc_client.o 00:03:50.107 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:50.107 LIB libspdk_idxd.a 00:03:50.107 SO libspdk_idxd.so.12.0 00:03:50.107 SYMLINK libspdk_idxd.so 00:03:50.107 LIB libspdk_vmd.a 00:03:50.107 SO libspdk_vmd.so.6.0 00:03:50.365 SYMLINK libspdk_vmd.so 00:03:50.365 LIB libspdk_jsonrpc.a 00:03:50.365 SO libspdk_jsonrpc.so.6.0 00:03:50.365 SYMLINK libspdk_jsonrpc.so 00:03:50.623 CC lib/rpc/rpc.o 00:03:50.891 LIB libspdk_rpc.a 00:03:50.892 SO libspdk_rpc.so.6.0 00:03:50.892 SYMLINK libspdk_rpc.so 00:03:51.148 CC lib/trace/trace.o 00:03:51.148 CC lib/keyring/keyring.o 00:03:51.148 CC lib/notify/notify.o 00:03:51.148 CC lib/keyring/keyring_rpc.o 00:03:51.148 CC lib/trace/trace_flags.o 00:03:51.148 CC lib/notify/notify_rpc.o 00:03:51.148 CC lib/trace/trace_rpc.o 00:03:51.148 LIB libspdk_notify.a 00:03:51.148 SO libspdk_notify.so.6.0 00:03:51.406 LIB libspdk_keyring.a 00:03:51.406 SYMLINK libspdk_notify.so 00:03:51.406 LIB libspdk_trace.a 00:03:51.406 SO libspdk_keyring.so.1.0 00:03:51.406 SO libspdk_trace.so.10.0 00:03:51.406 SYMLINK libspdk_keyring.so 00:03:51.406 SYMLINK libspdk_trace.so 00:03:51.664 CC lib/thread/thread.o 00:03:51.664 CC lib/thread/iobuf.o 00:03:51.664 CC lib/sock/sock.o 00:03:51.664 CC lib/sock/sock_rpc.o 00:03:51.664 LIB libspdk_env_dpdk.a 00:03:51.664 SO libspdk_env_dpdk.so.14.0 00:03:51.922 SYMLINK libspdk_env_dpdk.so 00:03:51.922 LIB libspdk_sock.a 00:03:51.922 SO libspdk_sock.so.9.0 00:03:51.922 SYMLINK libspdk_sock.so 00:03:52.192 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:52.192 CC lib/nvme/nvme_ctrlr.o 00:03:52.192 CC lib/nvme/nvme_fabric.o 00:03:52.192 CC lib/nvme/nvme_ns_cmd.o 00:03:52.192 CC lib/nvme/nvme_ns.o 00:03:52.192 CC lib/nvme/nvme_pcie_common.o 00:03:52.192 CC lib/nvme/nvme_pcie.o 00:03:52.192 CC lib/nvme/nvme_qpair.o 00:03:52.192 CC lib/nvme/nvme.o 00:03:52.192 CC lib/nvme/nvme_quirks.o 00:03:52.192 CC lib/nvme/nvme_transport.o 00:03:52.192 CC lib/nvme/nvme_discovery.o 00:03:52.192 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:52.192 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:52.192 CC lib/nvme/nvme_tcp.o 00:03:52.192 CC lib/nvme/nvme_opal.o 00:03:52.192 CC lib/nvme/nvme_io_msg.o 00:03:52.192 CC lib/nvme/nvme_poll_group.o 00:03:52.192 CC lib/nvme/nvme_zns.o 00:03:52.192 CC lib/nvme/nvme_stubs.o 00:03:52.192 CC lib/nvme/nvme_auth.o 00:03:52.192 CC lib/nvme/nvme_cuse.o 00:03:52.192 CC lib/nvme/nvme_vfio_user.o 00:03:52.192 CC lib/nvme/nvme_rdma.o 00:03:53.142 LIB libspdk_thread.a 00:03:53.142 SO libspdk_thread.so.10.0 00:03:53.401 SYMLINK libspdk_thread.so 00:03:53.401 CC lib/accel/accel.o 00:03:53.401 CC lib/virtio/virtio.o 00:03:53.401 CC lib/blob/blobstore.o 00:03:53.401 CC lib/vfu_tgt/tgt_endpoint.o 00:03:53.401 CC lib/accel/accel_rpc.o 00:03:53.401 CC lib/virtio/virtio_vhost_user.o 00:03:53.401 CC lib/blob/request.o 00:03:53.401 CC lib/init/json_config.o 00:03:53.401 CC lib/vfu_tgt/tgt_rpc.o 00:03:53.401 CC lib/accel/accel_sw.o 00:03:53.401 CC lib/virtio/virtio_vfio_user.o 00:03:53.401 CC lib/init/subsystem.o 00:03:53.401 CC lib/blob/zeroes.o 00:03:53.401 CC lib/virtio/virtio_pci.o 00:03:53.401 CC lib/init/subsystem_rpc.o 00:03:53.401 CC lib/blob/blob_bs_dev.o 00:03:53.401 CC lib/init/rpc.o 00:03:53.659 LIB libspdk_init.a 00:03:53.659 SO libspdk_init.so.5.0 00:03:53.917 LIB libspdk_virtio.a 00:03:53.917 LIB libspdk_vfu_tgt.a 00:03:53.917 SYMLINK libspdk_init.so 00:03:53.917 SO libspdk_vfu_tgt.so.3.0 00:03:53.917 
SO libspdk_virtio.so.7.0 00:03:53.917 SYMLINK libspdk_vfu_tgt.so 00:03:53.917 SYMLINK libspdk_virtio.so 00:03:53.917 CC lib/event/app.o 00:03:53.917 CC lib/event/reactor.o 00:03:53.917 CC lib/event/log_rpc.o 00:03:53.917 CC lib/event/app_rpc.o 00:03:53.917 CC lib/event/scheduler_static.o 00:03:54.484 LIB libspdk_event.a 00:03:54.484 SO libspdk_event.so.13.0 00:03:54.484 LIB libspdk_accel.a 00:03:54.484 SYMLINK libspdk_event.so 00:03:54.484 SO libspdk_accel.so.15.0 00:03:54.484 LIB libspdk_nvme.a 00:03:54.484 SYMLINK libspdk_accel.so 00:03:54.742 SO libspdk_nvme.so.13.0 00:03:54.742 CC lib/bdev/bdev.o 00:03:54.742 CC lib/bdev/bdev_rpc.o 00:03:54.742 CC lib/bdev/bdev_zone.o 00:03:54.742 CC lib/bdev/part.o 00:03:54.742 CC lib/bdev/scsi_nvme.o 00:03:55.001 SYMLINK libspdk_nvme.so 00:03:56.903 LIB libspdk_blob.a 00:03:56.903 SO libspdk_blob.so.11.0 00:03:56.903 SYMLINK libspdk_blob.so 00:03:56.903 CC lib/blobfs/blobfs.o 00:03:56.903 CC lib/blobfs/tree.o 00:03:56.903 CC lib/lvol/lvol.o 00:03:57.470 LIB libspdk_bdev.a 00:03:57.470 SO libspdk_bdev.so.15.0 00:03:57.470 SYMLINK libspdk_bdev.so 00:03:57.736 CC lib/ublk/ublk.o 00:03:57.736 CC lib/scsi/dev.o 00:03:57.736 CC lib/nbd/nbd.o 00:03:57.736 CC lib/ublk/ublk_rpc.o 00:03:57.736 CC lib/scsi/lun.o 00:03:57.736 LIB libspdk_blobfs.a 00:03:57.736 CC lib/nvmf/ctrlr.o 00:03:57.736 CC lib/nbd/nbd_rpc.o 00:03:57.736 CC lib/scsi/port.o 00:03:57.736 CC lib/nvmf/ctrlr_discovery.o 00:03:57.736 CC lib/scsi/scsi.o 00:03:57.736 CC lib/ftl/ftl_core.o 00:03:57.736 CC lib/nvmf/ctrlr_bdev.o 00:03:57.736 CC lib/scsi/scsi_bdev.o 00:03:57.736 CC lib/ftl/ftl_init.o 00:03:57.736 CC lib/nvmf/subsystem.o 00:03:57.736 CC lib/scsi/scsi_pr.o 00:03:57.736 CC lib/nvmf/nvmf.o 00:03:57.736 CC lib/ftl/ftl_layout.o 00:03:57.736 CC lib/scsi/scsi_rpc.o 00:03:57.736 CC lib/ftl/ftl_debug.o 00:03:57.736 CC lib/ftl/ftl_io.o 00:03:57.736 CC lib/scsi/task.o 00:03:57.736 CC lib/nvmf/nvmf_rpc.o 00:03:57.736 CC lib/nvmf/transport.o 00:03:57.736 CC lib/ftl/ftl_sb.o 00:03:57.736 CC lib/nvmf/tcp.o 00:03:57.736 CC lib/nvmf/stubs.o 00:03:57.736 CC lib/ftl/ftl_l2p.o 00:03:57.736 CC lib/nvmf/mdns_server.o 00:03:57.736 CC lib/ftl/ftl_l2p_flat.o 00:03:57.736 CC lib/ftl/ftl_nv_cache.o 00:03:57.736 CC lib/nvmf/vfio_user.o 00:03:57.736 CC lib/ftl/ftl_band.o 00:03:57.736 CC lib/nvmf/rdma.o 00:03:57.736 CC lib/nvmf/auth.o 00:03:57.736 CC lib/ftl/ftl_band_ops.o 00:03:57.736 CC lib/ftl/ftl_writer.o 00:03:57.736 CC lib/ftl/ftl_rq.o 00:03:57.736 CC lib/ftl/ftl_reloc.o 00:03:57.736 CC lib/ftl/ftl_l2p_cache.o 00:03:57.736 CC lib/ftl/ftl_p2l.o 00:03:57.736 CC lib/ftl/mngt/ftl_mngt.o 00:03:57.736 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:57.737 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:57.737 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:57.737 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:57.737 SO libspdk_blobfs.so.10.0 00:03:57.737 SYMLINK libspdk_blobfs.so 00:03:57.737 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:57.737 LIB libspdk_lvol.a 00:03:58.013 SO libspdk_lvol.so.10.0 00:03:58.013 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:58.013 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:58.013 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:58.013 SYMLINK libspdk_lvol.so 00:03:58.013 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:58.013 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:58.013 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:58.013 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:58.013 CC lib/ftl/utils/ftl_conf.o 00:03:58.013 CC lib/ftl/utils/ftl_md.o 00:03:58.013 CC lib/ftl/utils/ftl_mempool.o 00:03:58.013 CC lib/ftl/utils/ftl_bitmap.o 00:03:58.013 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:58.013 CC lib/ftl/utils/ftl_property.o 00:03:58.013 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:58.013 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:58.013 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:58.276 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:58.276 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:58.276 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:58.276 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:58.276 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:58.276 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:58.276 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:58.276 CC lib/ftl/base/ftl_base_dev.o 00:03:58.276 CC lib/ftl/base/ftl_base_bdev.o 00:03:58.276 CC lib/ftl/ftl_trace.o 00:03:58.533 LIB libspdk_nbd.a 00:03:58.533 SO libspdk_nbd.so.7.0 00:03:58.533 LIB libspdk_scsi.a 00:03:58.533 SO libspdk_scsi.so.9.0 00:03:58.533 SYMLINK libspdk_nbd.so 00:03:58.533 SYMLINK libspdk_scsi.so 00:03:58.791 LIB libspdk_ublk.a 00:03:58.791 SO libspdk_ublk.so.3.0 00:03:58.791 SYMLINK libspdk_ublk.so 00:03:58.791 CC lib/vhost/vhost.o 00:03:58.791 CC lib/iscsi/conn.o 00:03:58.791 CC lib/vhost/vhost_rpc.o 00:03:58.791 CC lib/iscsi/init_grp.o 00:03:58.791 CC lib/vhost/vhost_scsi.o 00:03:58.791 CC lib/iscsi/iscsi.o 00:03:58.791 CC lib/iscsi/md5.o 00:03:58.791 CC lib/vhost/vhost_blk.o 00:03:58.791 CC lib/vhost/rte_vhost_user.o 00:03:58.791 CC lib/iscsi/param.o 00:03:58.791 CC lib/iscsi/portal_grp.o 00:03:58.791 CC lib/iscsi/tgt_node.o 00:03:58.791 CC lib/iscsi/iscsi_subsystem.o 00:03:58.791 CC lib/iscsi/iscsi_rpc.o 00:03:58.791 CC lib/iscsi/task.o 00:03:59.049 LIB libspdk_ftl.a 00:03:59.307 SO libspdk_ftl.so.9.0 00:03:59.566 SYMLINK libspdk_ftl.so 00:04:00.131 LIB libspdk_vhost.a 00:04:00.131 SO libspdk_vhost.so.8.0 00:04:00.131 SYMLINK libspdk_vhost.so 00:04:00.131 LIB libspdk_nvmf.a 00:04:00.389 LIB libspdk_iscsi.a 00:04:00.389 SO libspdk_nvmf.so.18.0 00:04:00.389 SO libspdk_iscsi.so.8.0 00:04:00.389 SYMLINK libspdk_iscsi.so 00:04:00.389 SYMLINK libspdk_nvmf.so 00:04:00.648 CC module/env_dpdk/env_dpdk_rpc.o 00:04:00.648 CC module/vfu_device/vfu_virtio.o 00:04:00.648 CC module/vfu_device/vfu_virtio_blk.o 00:04:00.648 CC module/vfu_device/vfu_virtio_scsi.o 00:04:00.648 CC module/vfu_device/vfu_virtio_rpc.o 00:04:00.905 CC module/sock/posix/posix.o 00:04:00.905 CC module/accel/error/accel_error.o 00:04:00.905 CC module/accel/error/accel_error_rpc.o 00:04:00.905 CC module/blob/bdev/blob_bdev.o 00:04:00.905 CC module/accel/ioat/accel_ioat.o 00:04:00.905 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:00.905 CC module/keyring/file/keyring.o 00:04:00.905 CC module/accel/ioat/accel_ioat_rpc.o 00:04:00.905 CC module/scheduler/gscheduler/gscheduler.o 00:04:00.905 CC module/keyring/file/keyring_rpc.o 00:04:00.905 CC module/accel/dsa/accel_dsa.o 00:04:00.905 CC module/keyring/linux/keyring.o 00:04:00.905 CC module/accel/dsa/accel_dsa_rpc.o 00:04:00.905 CC module/keyring/linux/keyring_rpc.o 00:04:00.905 CC module/accel/iaa/accel_iaa.o 00:04:00.905 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:00.905 CC module/accel/iaa/accel_iaa_rpc.o 00:04:00.905 LIB libspdk_env_dpdk_rpc.a 00:04:00.905 SO libspdk_env_dpdk_rpc.so.6.0 00:04:00.905 SYMLINK libspdk_env_dpdk_rpc.so 00:04:01.162 LIB libspdk_keyring_file.a 00:04:01.162 LIB libspdk_keyring_linux.a 00:04:01.162 LIB libspdk_scheduler_dpdk_governor.a 00:04:01.162 LIB libspdk_scheduler_gscheduler.a 00:04:01.162 SO libspdk_keyring_file.so.1.0 00:04:01.162 SO libspdk_keyring_linux.so.1.0 00:04:01.162 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:01.162 LIB 
libspdk_accel_error.a 00:04:01.162 LIB libspdk_accel_ioat.a 00:04:01.162 LIB libspdk_scheduler_dynamic.a 00:04:01.162 SO libspdk_scheduler_gscheduler.so.4.0 00:04:01.162 SO libspdk_accel_error.so.2.0 00:04:01.162 SO libspdk_accel_ioat.so.6.0 00:04:01.162 SO libspdk_scheduler_dynamic.so.4.0 00:04:01.162 LIB libspdk_accel_iaa.a 00:04:01.162 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:01.162 SYMLINK libspdk_keyring_file.so 00:04:01.162 SYMLINK libspdk_keyring_linux.so 00:04:01.162 SYMLINK libspdk_scheduler_gscheduler.so 00:04:01.162 SO libspdk_accel_iaa.so.3.0 00:04:01.162 SYMLINK libspdk_accel_error.so 00:04:01.162 LIB libspdk_accel_dsa.a 00:04:01.162 SYMLINK libspdk_scheduler_dynamic.so 00:04:01.162 LIB libspdk_blob_bdev.a 00:04:01.162 SYMLINK libspdk_accel_ioat.so 00:04:01.162 SO libspdk_accel_dsa.so.5.0 00:04:01.162 SO libspdk_blob_bdev.so.11.0 00:04:01.162 SYMLINK libspdk_accel_iaa.so 00:04:01.162 SYMLINK libspdk_blob_bdev.so 00:04:01.162 SYMLINK libspdk_accel_dsa.so 00:04:01.425 LIB libspdk_vfu_device.a 00:04:01.425 SO libspdk_vfu_device.so.3.0 00:04:01.425 CC module/bdev/error/vbdev_error.o 00:04:01.425 CC module/bdev/delay/vbdev_delay.o 00:04:01.425 CC module/bdev/lvol/vbdev_lvol.o 00:04:01.425 CC module/bdev/malloc/bdev_malloc.o 00:04:01.425 CC module/bdev/aio/bdev_aio.o 00:04:01.425 CC module/bdev/error/vbdev_error_rpc.o 00:04:01.425 CC module/bdev/passthru/vbdev_passthru.o 00:04:01.425 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:01.425 CC module/bdev/nvme/bdev_nvme.o 00:04:01.425 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:01.425 CC module/bdev/aio/bdev_aio_rpc.o 00:04:01.425 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:01.425 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:01.425 CC module/blobfs/bdev/blobfs_bdev.o 00:04:01.425 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:01.426 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:01.426 CC module/bdev/gpt/gpt.o 00:04:01.426 CC module/bdev/iscsi/bdev_iscsi.o 00:04:01.426 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:01.426 CC module/bdev/nvme/nvme_rpc.o 00:04:01.426 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:01.426 CC module/bdev/gpt/vbdev_gpt.o 00:04:01.426 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:01.426 CC module/bdev/nvme/bdev_mdns_client.o 00:04:01.426 CC module/bdev/null/bdev_null.o 00:04:01.426 CC module/bdev/split/vbdev_split.o 00:04:01.426 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:01.426 CC module/bdev/split/vbdev_split_rpc.o 00:04:01.426 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:01.426 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:01.426 CC module/bdev/raid/bdev_raid.o 00:04:01.426 CC module/bdev/null/bdev_null_rpc.o 00:04:01.426 CC module/bdev/ftl/bdev_ftl.o 00:04:01.426 CC module/bdev/nvme/vbdev_opal.o 00:04:01.426 CC module/bdev/raid/bdev_raid_rpc.o 00:04:01.426 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:01.426 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:01.426 CC module/bdev/raid/bdev_raid_sb.o 00:04:01.426 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:01.426 CC module/bdev/raid/raid1.o 00:04:01.426 CC module/bdev/raid/raid0.o 00:04:01.426 CC module/bdev/raid/concat.o 00:04:01.683 SYMLINK libspdk_vfu_device.so 00:04:01.683 LIB libspdk_sock_posix.a 00:04:01.683 SO libspdk_sock_posix.so.6.0 00:04:01.942 LIB libspdk_blobfs_bdev.a 00:04:01.942 SO libspdk_blobfs_bdev.so.6.0 00:04:01.942 SYMLINK libspdk_sock_posix.so 00:04:01.942 SYMLINK libspdk_blobfs_bdev.so 00:04:01.942 LIB libspdk_bdev_split.a 00:04:01.942 LIB libspdk_bdev_zone_block.a 00:04:01.942 SO libspdk_bdev_split.so.6.0 00:04:01.942 LIB 
libspdk_bdev_error.a 00:04:01.942 LIB libspdk_bdev_ftl.a 00:04:01.942 LIB libspdk_bdev_null.a 00:04:01.942 SO libspdk_bdev_zone_block.so.6.0 00:04:01.942 SO libspdk_bdev_error.so.6.0 00:04:01.942 SO libspdk_bdev_ftl.so.6.0 00:04:01.942 LIB libspdk_bdev_gpt.a 00:04:01.942 SO libspdk_bdev_null.so.6.0 00:04:01.942 SYMLINK libspdk_bdev_split.so 00:04:01.942 LIB libspdk_bdev_passthru.a 00:04:01.942 SO libspdk_bdev_gpt.so.6.0 00:04:02.200 SYMLINK libspdk_bdev_zone_block.so 00:04:02.200 SYMLINK libspdk_bdev_error.so 00:04:02.200 LIB libspdk_bdev_iscsi.a 00:04:02.200 SYMLINK libspdk_bdev_ftl.so 00:04:02.200 SO libspdk_bdev_passthru.so.6.0 00:04:02.200 LIB libspdk_bdev_malloc.a 00:04:02.200 SYMLINK libspdk_bdev_null.so 00:04:02.200 LIB libspdk_bdev_aio.a 00:04:02.200 SO libspdk_bdev_iscsi.so.6.0 00:04:02.200 SYMLINK libspdk_bdev_gpt.so 00:04:02.200 SO libspdk_bdev_malloc.so.6.0 00:04:02.200 SO libspdk_bdev_aio.so.6.0 00:04:02.200 LIB libspdk_bdev_delay.a 00:04:02.200 SYMLINK libspdk_bdev_passthru.so 00:04:02.200 SYMLINK libspdk_bdev_iscsi.so 00:04:02.200 SO libspdk_bdev_delay.so.6.0 00:04:02.200 SYMLINK libspdk_bdev_malloc.so 00:04:02.200 SYMLINK libspdk_bdev_aio.so 00:04:02.200 LIB libspdk_bdev_lvol.a 00:04:02.200 SO libspdk_bdev_lvol.so.6.0 00:04:02.200 SYMLINK libspdk_bdev_delay.so 00:04:02.200 SYMLINK libspdk_bdev_lvol.so 00:04:02.200 LIB libspdk_bdev_virtio.a 00:04:02.458 SO libspdk_bdev_virtio.so.6.0 00:04:02.458 SYMLINK libspdk_bdev_virtio.so 00:04:02.716 LIB libspdk_bdev_raid.a 00:04:02.716 SO libspdk_bdev_raid.so.6.0 00:04:02.974 SYMLINK libspdk_bdev_raid.so 00:04:03.909 LIB libspdk_bdev_nvme.a 00:04:03.909 SO libspdk_bdev_nvme.so.7.0 00:04:04.168 SYMLINK libspdk_bdev_nvme.so 00:04:04.426 CC module/event/subsystems/sock/sock.o 00:04:04.426 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:04.426 CC module/event/subsystems/vmd/vmd.o 00:04:04.426 CC module/event/subsystems/iobuf/iobuf.o 00:04:04.426 CC module/event/subsystems/keyring/keyring.o 00:04:04.426 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:04.426 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:04.426 CC module/event/subsystems/scheduler/scheduler.o 00:04:04.426 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:04.426 LIB libspdk_event_keyring.a 00:04:04.685 LIB libspdk_event_sock.a 00:04:04.685 LIB libspdk_event_vhost_blk.a 00:04:04.685 LIB libspdk_event_vmd.a 00:04:04.685 LIB libspdk_event_vfu_tgt.a 00:04:04.685 LIB libspdk_event_scheduler.a 00:04:04.685 SO libspdk_event_keyring.so.1.0 00:04:04.685 LIB libspdk_event_iobuf.a 00:04:04.685 SO libspdk_event_sock.so.5.0 00:04:04.685 SO libspdk_event_vhost_blk.so.3.0 00:04:04.685 SO libspdk_event_vfu_tgt.so.3.0 00:04:04.685 SO libspdk_event_scheduler.so.4.0 00:04:04.685 SO libspdk_event_vmd.so.6.0 00:04:04.685 SO libspdk_event_iobuf.so.3.0 00:04:04.685 SYMLINK libspdk_event_keyring.so 00:04:04.685 SYMLINK libspdk_event_sock.so 00:04:04.685 SYMLINK libspdk_event_vhost_blk.so 00:04:04.685 SYMLINK libspdk_event_vfu_tgt.so 00:04:04.685 SYMLINK libspdk_event_scheduler.so 00:04:04.685 SYMLINK libspdk_event_vmd.so 00:04:04.685 SYMLINK libspdk_event_iobuf.so 00:04:04.943 CC module/event/subsystems/accel/accel.o 00:04:04.943 LIB libspdk_event_accel.a 00:04:04.943 SO libspdk_event_accel.so.6.0 00:04:04.943 SYMLINK libspdk_event_accel.so 00:04:05.201 CC module/event/subsystems/bdev/bdev.o 00:04:05.459 LIB libspdk_event_bdev.a 00:04:05.459 SO libspdk_event_bdev.so.6.0 00:04:05.459 SYMLINK libspdk_event_bdev.so 00:04:05.718 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:05.718 CC 
module/event/subsystems/nbd/nbd.o 00:04:05.718 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:05.718 CC module/event/subsystems/ublk/ublk.o 00:04:05.718 CC module/event/subsystems/scsi/scsi.o 00:04:05.718 LIB libspdk_event_nbd.a 00:04:05.718 LIB libspdk_event_ublk.a 00:04:05.718 LIB libspdk_event_scsi.a 00:04:05.718 SO libspdk_event_ublk.so.3.0 00:04:05.718 SO libspdk_event_nbd.so.6.0 00:04:05.718 SO libspdk_event_scsi.so.6.0 00:04:05.976 SYMLINK libspdk_event_nbd.so 00:04:05.976 SYMLINK libspdk_event_ublk.so 00:04:05.976 SYMLINK libspdk_event_scsi.so 00:04:05.976 LIB libspdk_event_nvmf.a 00:04:05.976 SO libspdk_event_nvmf.so.6.0 00:04:05.976 SYMLINK libspdk_event_nvmf.so 00:04:05.976 CC module/event/subsystems/iscsi/iscsi.o 00:04:05.976 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:06.235 LIB libspdk_event_vhost_scsi.a 00:04:06.235 LIB libspdk_event_iscsi.a 00:04:06.235 SO libspdk_event_vhost_scsi.so.3.0 00:04:06.235 SO libspdk_event_iscsi.so.6.0 00:04:06.235 SYMLINK libspdk_event_vhost_scsi.so 00:04:06.235 SYMLINK libspdk_event_iscsi.so 00:04:06.499 SO libspdk.so.6.0 00:04:06.499 SYMLINK libspdk.so 00:04:06.499 TEST_HEADER include/spdk/accel.h 00:04:06.499 TEST_HEADER include/spdk/accel_module.h 00:04:06.499 TEST_HEADER include/spdk/assert.h 00:04:06.499 CXX app/trace/trace.o 00:04:06.499 TEST_HEADER include/spdk/barrier.h 00:04:06.499 CC app/spdk_top/spdk_top.o 00:04:06.499 TEST_HEADER include/spdk/base64.h 00:04:06.499 TEST_HEADER include/spdk/bdev.h 00:04:06.499 TEST_HEADER include/spdk/bdev_module.h 00:04:06.760 CC app/trace_record/trace_record.o 00:04:06.760 CC app/spdk_nvme_perf/perf.o 00:04:06.760 TEST_HEADER include/spdk/bdev_zone.h 00:04:06.760 TEST_HEADER include/spdk/bit_array.h 00:04:06.760 CC test/rpc_client/rpc_client_test.o 00:04:06.760 CC app/spdk_nvme_discover/discovery_aer.o 00:04:06.760 TEST_HEADER include/spdk/bit_pool.h 00:04:06.760 CC app/spdk_nvme_identify/identify.o 00:04:06.760 CC app/spdk_lspci/spdk_lspci.o 00:04:06.760 TEST_HEADER include/spdk/blob_bdev.h 00:04:06.760 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:06.760 TEST_HEADER include/spdk/blobfs.h 00:04:06.760 TEST_HEADER include/spdk/blob.h 00:04:06.760 TEST_HEADER include/spdk/conf.h 00:04:06.760 TEST_HEADER include/spdk/config.h 00:04:06.760 TEST_HEADER include/spdk/cpuset.h 00:04:06.760 TEST_HEADER include/spdk/crc16.h 00:04:06.760 TEST_HEADER include/spdk/crc32.h 00:04:06.760 TEST_HEADER include/spdk/crc64.h 00:04:06.760 TEST_HEADER include/spdk/dif.h 00:04:06.760 TEST_HEADER include/spdk/dma.h 00:04:06.760 TEST_HEADER include/spdk/endian.h 00:04:06.760 TEST_HEADER include/spdk/env_dpdk.h 00:04:06.760 TEST_HEADER include/spdk/env.h 00:04:06.760 TEST_HEADER include/spdk/event.h 00:04:06.760 TEST_HEADER include/spdk/fd_group.h 00:04:06.760 CC app/spdk_dd/spdk_dd.o 00:04:06.760 TEST_HEADER include/spdk/fd.h 00:04:06.760 TEST_HEADER include/spdk/file.h 00:04:06.760 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:06.760 TEST_HEADER include/spdk/ftl.h 00:04:06.760 TEST_HEADER include/spdk/gpt_spec.h 00:04:06.760 TEST_HEADER include/spdk/hexlify.h 00:04:06.760 CC app/nvmf_tgt/nvmf_main.o 00:04:06.760 TEST_HEADER include/spdk/histogram_data.h 00:04:06.760 TEST_HEADER include/spdk/idxd.h 00:04:06.760 CC app/vhost/vhost.o 00:04:06.760 TEST_HEADER include/spdk/idxd_spec.h 00:04:06.760 TEST_HEADER include/spdk/init.h 00:04:06.760 CC app/iscsi_tgt/iscsi_tgt.o 00:04:06.760 TEST_HEADER include/spdk/ioat.h 00:04:06.760 TEST_HEADER include/spdk/ioat_spec.h 00:04:06.760 TEST_HEADER 
include/spdk/iscsi_spec.h 00:04:06.760 TEST_HEADER include/spdk/json.h 00:04:06.760 TEST_HEADER include/spdk/jsonrpc.h 00:04:06.760 TEST_HEADER include/spdk/keyring.h 00:04:06.760 TEST_HEADER include/spdk/keyring_module.h 00:04:06.760 TEST_HEADER include/spdk/likely.h 00:04:06.760 TEST_HEADER include/spdk/log.h 00:04:06.760 TEST_HEADER include/spdk/lvol.h 00:04:06.760 CC examples/ioat/verify/verify.o 00:04:06.760 CC test/thread/poller_perf/poller_perf.o 00:04:06.760 TEST_HEADER include/spdk/memory.h 00:04:06.760 CC app/spdk_tgt/spdk_tgt.o 00:04:06.760 CC test/nvme/aer/aer.o 00:04:06.760 CC examples/sock/hello_world/hello_sock.o 00:04:06.760 TEST_HEADER include/spdk/mmio.h 00:04:06.760 CC app/fio/nvme/fio_plugin.o 00:04:06.760 CC test/app/histogram_perf/histogram_perf.o 00:04:06.760 TEST_HEADER include/spdk/nbd.h 00:04:06.760 CC examples/nvme/reconnect/reconnect.o 00:04:06.760 TEST_HEADER include/spdk/notify.h 00:04:06.760 CC test/event/reactor/reactor.o 00:04:06.760 CC examples/vmd/lsvmd/lsvmd.o 00:04:06.760 TEST_HEADER include/spdk/nvme.h 00:04:06.760 CC test/app/jsoncat/jsoncat.o 00:04:06.760 CC test/event/event_perf/event_perf.o 00:04:06.760 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:06.760 CC examples/accel/perf/accel_perf.o 00:04:06.760 TEST_HEADER include/spdk/nvme_intel.h 00:04:06.760 CC examples/nvme/hello_world/hello_world.o 00:04:06.760 CC examples/idxd/perf/perf.o 00:04:06.760 CC test/nvme/reset/reset.o 00:04:06.760 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:06.760 CC examples/ioat/perf/perf.o 00:04:06.760 CC examples/util/zipf/zipf.o 00:04:06.760 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:06.760 TEST_HEADER include/spdk/nvme_spec.h 00:04:06.760 TEST_HEADER include/spdk/nvme_zns.h 00:04:06.760 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:06.760 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:06.760 TEST_HEADER include/spdk/nvmf.h 00:04:06.760 TEST_HEADER include/spdk/nvmf_spec.h 00:04:06.760 TEST_HEADER include/spdk/nvmf_transport.h 00:04:06.760 CC examples/blob/hello_world/hello_blob.o 00:04:06.760 TEST_HEADER include/spdk/opal.h 00:04:06.760 CC test/blobfs/mkfs/mkfs.o 00:04:06.760 TEST_HEADER include/spdk/opal_spec.h 00:04:06.760 TEST_HEADER include/spdk/pci_ids.h 00:04:06.760 CC examples/bdev/hello_world/hello_bdev.o 00:04:06.760 TEST_HEADER include/spdk/pipe.h 00:04:06.760 TEST_HEADER include/spdk/queue.h 00:04:06.760 CC examples/blob/cli/blobcli.o 00:04:06.760 CC examples/thread/thread/thread_ex.o 00:04:06.760 TEST_HEADER include/spdk/reduce.h 00:04:06.760 CC test/bdev/bdevio/bdevio.o 00:04:06.760 CC test/accel/dif/dif.o 00:04:06.760 TEST_HEADER include/spdk/rpc.h 00:04:06.760 CC examples/bdev/bdevperf/bdevperf.o 00:04:06.760 TEST_HEADER include/spdk/scheduler.h 00:04:06.760 CC app/fio/bdev/fio_plugin.o 00:04:06.760 CC test/app/bdev_svc/bdev_svc.o 00:04:06.760 TEST_HEADER include/spdk/scsi.h 00:04:06.760 CC test/dma/test_dma/test_dma.o 00:04:06.760 TEST_HEADER include/spdk/scsi_spec.h 00:04:06.760 TEST_HEADER include/spdk/sock.h 00:04:06.760 TEST_HEADER include/spdk/stdinc.h 00:04:07.044 TEST_HEADER include/spdk/string.h 00:04:07.044 TEST_HEADER include/spdk/thread.h 00:04:07.044 TEST_HEADER include/spdk/trace.h 00:04:07.044 CC examples/nvmf/nvmf/nvmf.o 00:04:07.045 TEST_HEADER include/spdk/trace_parser.h 00:04:07.045 TEST_HEADER include/spdk/tree.h 00:04:07.045 TEST_HEADER include/spdk/ublk.h 00:04:07.045 TEST_HEADER include/spdk/util.h 00:04:07.045 TEST_HEADER include/spdk/uuid.h 00:04:07.045 TEST_HEADER include/spdk/version.h 00:04:07.045 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:04:07.045 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:07.045 TEST_HEADER include/spdk/vhost.h 00:04:07.045 TEST_HEADER include/spdk/vmd.h 00:04:07.045 TEST_HEADER include/spdk/xor.h 00:04:07.045 TEST_HEADER include/spdk/zipf.h 00:04:07.045 LINK spdk_lspci 00:04:07.045 CXX test/cpp_headers/accel.o 00:04:07.045 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.045 CC test/lvol/esnap/esnap.o 00:04:07.045 CC test/env/mem_callbacks/mem_callbacks.o 00:04:07.045 LINK rpc_client_test 00:04:07.045 LINK spdk_nvme_discover 00:04:07.045 LINK interrupt_tgt 00:04:07.045 LINK jsoncat 00:04:07.045 LINK poller_perf 00:04:07.045 LINK lsvmd 00:04:07.045 LINK reactor 00:04:07.045 LINK histogram_perf 00:04:07.045 LINK event_perf 00:04:07.045 LINK nvmf_tgt 00:04:07.045 LINK vhost 00:04:07.045 LINK zipf 00:04:07.045 LINK spdk_trace_record 00:04:07.308 LINK iscsi_tgt 00:04:07.308 LINK verify 00:04:07.308 LINK spdk_tgt 00:04:07.308 LINK mkfs 00:04:07.308 LINK ioat_perf 00:04:07.308 LINK bdev_svc 00:04:07.308 LINK hello_world 00:04:07.308 LINK hello_sock 00:04:07.308 LINK aer 00:04:07.308 LINK hello_blob 00:04:07.308 LINK thread 00:04:07.308 LINK hello_bdev 00:04:07.308 LINK reset 00:04:07.308 CXX test/cpp_headers/accel_module.o 00:04:07.308 LINK mem_callbacks 00:04:07.308 CC test/event/reactor_perf/reactor_perf.o 00:04:07.576 LINK spdk_dd 00:04:07.576 CXX test/cpp_headers/assert.o 00:04:07.576 LINK idxd_perf 00:04:07.576 LINK nvmf 00:04:07.576 LINK reconnect 00:04:07.576 CC examples/vmd/led/led.o 00:04:07.576 LINK spdk_trace 00:04:07.576 CC test/env/vtophys/vtophys.o 00:04:07.576 CC examples/nvme/arbitration/arbitration.o 00:04:07.576 CC test/nvme/sgl/sgl.o 00:04:07.576 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:07.576 CC examples/nvme/hotplug/hotplug.o 00:04:07.576 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:07.576 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:07.576 LINK bdevio 00:04:07.576 CXX test/cpp_headers/barrier.o 00:04:07.576 LINK test_dma 00:04:07.576 CC test/event/app_repeat/app_repeat.o 00:04:07.576 CC examples/nvme/abort/abort.o 00:04:07.576 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:07.576 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:07.837 LINK dif 00:04:07.837 CC test/app/stub/stub.o 00:04:07.837 LINK nvme_manage 00:04:07.837 CC test/env/memory/memory_ut.o 00:04:07.838 LINK accel_perf 00:04:07.838 LINK reactor_perf 00:04:07.838 CXX test/cpp_headers/base64.o 00:04:07.838 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:07.838 CXX test/cpp_headers/bdev.o 00:04:07.838 CC test/nvme/e2edp/nvme_dp.o 00:04:07.838 CXX test/cpp_headers/bdev_module.o 00:04:07.838 LINK nvme_fuzz 00:04:07.838 CC test/nvme/overhead/overhead.o 00:04:07.838 CXX test/cpp_headers/bdev_zone.o 00:04:07.838 CXX test/cpp_headers/bit_array.o 00:04:07.838 LINK led 00:04:07.838 CXX test/cpp_headers/bit_pool.o 00:04:07.838 CC test/env/pci/pci_ut.o 00:04:07.838 CC test/nvme/err_injection/err_injection.o 00:04:07.838 LINK spdk_nvme 00:04:07.838 LINK blobcli 00:04:07.838 CC test/event/scheduler/scheduler.o 00:04:07.838 CXX test/cpp_headers/blob_bdev.o 00:04:07.838 LINK vtophys 00:04:07.838 CXX test/cpp_headers/blobfs_bdev.o 00:04:07.838 CC test/nvme/reserve/reserve.o 00:04:07.838 CC test/nvme/startup/startup.o 00:04:07.838 LINK spdk_bdev 00:04:07.838 CC test/nvme/simple_copy/simple_copy.o 00:04:08.101 LINK app_repeat 00:04:08.101 CC test/nvme/connect_stress/connect_stress.o 00:04:08.101 CXX test/cpp_headers/blobfs.o 00:04:08.101 LINK env_dpdk_post_init 00:04:08.101 CXX 
test/cpp_headers/blob.o 00:04:08.102 LINK cmb_copy 00:04:08.102 LINK hotplug 00:04:08.102 CC test/nvme/boot_partition/boot_partition.o 00:04:08.102 LINK stub 00:04:08.102 LINK sgl 00:04:08.102 LINK pmr_persistence 00:04:08.102 CXX test/cpp_headers/conf.o 00:04:08.102 CXX test/cpp_headers/config.o 00:04:08.102 CXX test/cpp_headers/cpuset.o 00:04:08.102 CC test/nvme/compliance/nvme_compliance.o 00:04:08.102 CXX test/cpp_headers/crc16.o 00:04:08.102 LINK spdk_nvme_perf 00:04:08.102 CXX test/cpp_headers/crc32.o 00:04:08.102 CXX test/cpp_headers/crc64.o 00:04:08.102 CXX test/cpp_headers/dif.o 00:04:08.102 CC test/nvme/fused_ordering/fused_ordering.o 00:04:08.102 CXX test/cpp_headers/dma.o 00:04:08.102 CXX test/cpp_headers/endian.o 00:04:08.102 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:08.102 CXX test/cpp_headers/env_dpdk.o 00:04:08.102 LINK arbitration 00:04:08.102 CXX test/cpp_headers/env.o 00:04:08.102 CC test/nvme/fdp/fdp.o 00:04:08.368 CXX test/cpp_headers/event.o 00:04:08.368 CXX test/cpp_headers/fd_group.o 00:04:08.368 LINK err_injection 00:04:08.368 CC test/nvme/cuse/cuse.o 00:04:08.368 LINK startup 00:04:08.368 LINK spdk_nvme_identify 00:04:08.368 CXX test/cpp_headers/fd.o 00:04:08.368 LINK nvme_dp 00:04:08.368 LINK reserve 00:04:08.368 CXX test/cpp_headers/file.o 00:04:08.368 CXX test/cpp_headers/ftl.o 00:04:08.368 LINK spdk_top 00:04:08.368 LINK scheduler 00:04:08.368 CXX test/cpp_headers/gpt_spec.o 00:04:08.368 LINK overhead 00:04:08.368 CXX test/cpp_headers/hexlify.o 00:04:08.368 LINK abort 00:04:08.368 LINK bdevperf 00:04:08.368 LINK connect_stress 00:04:08.368 CXX test/cpp_headers/histogram_data.o 00:04:08.368 CXX test/cpp_headers/idxd.o 00:04:08.368 CXX test/cpp_headers/idxd_spec.o 00:04:08.368 CXX test/cpp_headers/init.o 00:04:08.368 CXX test/cpp_headers/ioat.o 00:04:08.368 CXX test/cpp_headers/ioat_spec.o 00:04:08.368 LINK simple_copy 00:04:08.368 LINK boot_partition 00:04:08.368 CXX test/cpp_headers/iscsi_spec.o 00:04:08.368 CXX test/cpp_headers/json.o 00:04:08.368 LINK vhost_fuzz 00:04:08.631 CXX test/cpp_headers/jsonrpc.o 00:04:08.631 CXX test/cpp_headers/keyring.o 00:04:08.631 CXX test/cpp_headers/keyring_module.o 00:04:08.631 CXX test/cpp_headers/likely.o 00:04:08.631 CXX test/cpp_headers/log.o 00:04:08.631 CXX test/cpp_headers/lvol.o 00:04:08.631 CXX test/cpp_headers/memory.o 00:04:08.631 CXX test/cpp_headers/mmio.o 00:04:08.631 CXX test/cpp_headers/nbd.o 00:04:08.631 CXX test/cpp_headers/notify.o 00:04:08.631 CXX test/cpp_headers/nvme.o 00:04:08.631 LINK doorbell_aers 00:04:08.631 LINK pci_ut 00:04:08.631 CXX test/cpp_headers/nvme_intel.o 00:04:08.631 CXX test/cpp_headers/nvme_ocssd.o 00:04:08.631 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:08.631 CXX test/cpp_headers/nvme_spec.o 00:04:08.631 CXX test/cpp_headers/nvme_zns.o 00:04:08.631 LINK fused_ordering 00:04:08.631 CXX test/cpp_headers/nvmf_cmd.o 00:04:08.631 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:08.631 CXX test/cpp_headers/nvmf.o 00:04:08.631 CXX test/cpp_headers/nvmf_spec.o 00:04:08.631 CXX test/cpp_headers/nvmf_transport.o 00:04:08.631 CXX test/cpp_headers/opal.o 00:04:08.631 CXX test/cpp_headers/opal_spec.o 00:04:08.631 CXX test/cpp_headers/pci_ids.o 00:04:08.631 CXX test/cpp_headers/pipe.o 00:04:08.631 CXX test/cpp_headers/queue.o 00:04:08.631 CXX test/cpp_headers/reduce.o 00:04:08.892 CXX test/cpp_headers/rpc.o 00:04:08.892 LINK nvme_compliance 00:04:08.892 CXX test/cpp_headers/scheduler.o 00:04:08.892 CXX test/cpp_headers/scsi.o 00:04:08.892 CXX test/cpp_headers/scsi_spec.o 00:04:08.892 CXX 
test/cpp_headers/sock.o 00:04:08.892 CXX test/cpp_headers/stdinc.o 00:04:08.892 CXX test/cpp_headers/string.o 00:04:08.892 CXX test/cpp_headers/thread.o 00:04:08.892 CXX test/cpp_headers/trace.o 00:04:08.892 CXX test/cpp_headers/trace_parser.o 00:04:08.892 CXX test/cpp_headers/tree.o 00:04:08.892 CXX test/cpp_headers/ublk.o 00:04:08.892 CXX test/cpp_headers/util.o 00:04:08.892 CXX test/cpp_headers/uuid.o 00:04:08.892 CXX test/cpp_headers/version.o 00:04:08.892 CXX test/cpp_headers/vfio_user_pci.o 00:04:08.892 CXX test/cpp_headers/vfio_user_spec.o 00:04:08.892 CXX test/cpp_headers/vhost.o 00:04:08.892 CXX test/cpp_headers/vmd.o 00:04:08.892 CXX test/cpp_headers/xor.o 00:04:08.892 LINK fdp 00:04:08.892 CXX test/cpp_headers/zipf.o 00:04:09.151 LINK memory_ut 00:04:10.084 LINK iscsi_fuzz 00:04:10.084 LINK cuse 00:04:13.374 LINK esnap 00:04:13.374 00:04:13.374 real 0m40.524s 00:04:13.374 user 7m35.841s 00:04:13.374 sys 1m48.449s 00:04:13.374 22:34:05 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:13.374 22:34:05 make -- common/autotest_common.sh@10 -- $ set +x 00:04:13.374 ************************************ 00:04:13.374 END TEST make 00:04:13.374 ************************************ 00:04:13.374 22:34:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:13.374 22:34:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:13.374 22:34:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:13.374 22:34:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.374 22:34:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:13.374 22:34:05 -- pm/common@44 -- $ pid=3302786 00:04:13.374 22:34:05 -- pm/common@50 -- $ kill -TERM 3302786 00:04:13.374 22:34:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.374 22:34:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:13.374 22:34:05 -- pm/common@44 -- $ pid=3302788 00:04:13.374 22:34:05 -- pm/common@50 -- $ kill -TERM 3302788 00:04:13.374 22:34:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.374 22:34:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:13.374 22:34:05 -- pm/common@44 -- $ pid=3302790 00:04:13.374 22:34:05 -- pm/common@50 -- $ kill -TERM 3302790 00:04:13.374 22:34:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.374 22:34:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:13.374 22:34:05 -- pm/common@44 -- $ pid=3302818 00:04:13.374 22:34:05 -- pm/common@50 -- $ sudo -E kill -TERM 3302818 00:04:13.374 22:34:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:13.374 22:34:05 -- nvmf/common.sh@7 -- # uname -s 00:04:13.374 22:34:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.374 22:34:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.374 22:34:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.374 22:34:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.374 22:34:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.374 22:34:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.374 22:34:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.374 22:34:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:04:13.374 22:34:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.374 22:34:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.374 22:34:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:13.374 22:34:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:13.374 22:34:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.374 22:34:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.374 22:34:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:13.374 22:34:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:13.374 22:34:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:13.374 22:34:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.374 22:34:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.374 22:34:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.374 22:34:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.374 22:34:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.374 22:34:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.374 22:34:05 -- paths/export.sh@5 -- # export PATH 00:04:13.374 22:34:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.374 22:34:05 -- nvmf/common.sh@47 -- # : 0 00:04:13.374 22:34:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:13.374 22:34:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:13.374 22:34:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:13.374 22:34:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.374 22:34:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.374 22:34:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:13.374 22:34:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:13.374 22:34:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:13.374 22:34:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:13.374 22:34:05 -- spdk/autotest.sh@32 -- # uname -s 00:04:13.374 22:34:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:13.374 22:34:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:13.374 22:34:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:13.374 22:34:05 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:13.374 22:34:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:13.374 22:34:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:13.374 22:34:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:13.374 22:34:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:13.374 22:34:05 -- spdk/autotest.sh@48 -- # udevadm_pid=3379192 00:04:13.374 22:34:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:13.374 22:34:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:13.374 22:34:05 -- pm/common@17 -- # local monitor 00:04:13.374 22:34:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.374 22:34:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.374 22:34:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.374 22:34:05 -- pm/common@21 -- # date +%s 00:04:13.374 22:34:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.374 22:34:05 -- pm/common@21 -- # date +%s 00:04:13.374 22:34:05 -- pm/common@25 -- # sleep 1 00:04:13.374 22:34:05 -- pm/common@21 -- # date +%s 00:04:13.374 22:34:05 -- pm/common@21 -- # date +%s 00:04:13.374 22:34:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722026045 00:04:13.374 22:34:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722026045 00:04:13.374 22:34:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722026045 00:04:13.374 22:34:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722026045 00:04:13.374 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722026045_collect-vmstat.pm.log 00:04:13.374 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722026045_collect-cpu-load.pm.log 00:04:13.374 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722026045_collect-cpu-temp.pm.log 00:04:13.374 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722026045_collect-bmc-pm.bmc.pm.log 00:04:14.309 22:34:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:14.309 22:34:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:14.309 22:34:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:14.309 22:34:06 -- common/autotest_common.sh@10 -- # set +x 00:04:14.309 22:34:06 -- spdk/autotest.sh@59 -- # create_test_list 00:04:14.309 22:34:06 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:14.309 22:34:06 -- common/autotest_common.sh@10 -- # set +x 00:04:14.309 22:34:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:14.309 22:34:06 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:14.309 22:34:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:14.309 22:34:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:14.309 22:34:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:14.309 22:34:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:14.309 22:34:06 -- common/autotest_common.sh@1451 -- # uname 00:04:14.309 22:34:06 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:14.309 22:34:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:14.309 22:34:06 -- common/autotest_common.sh@1471 -- # uname 00:04:14.309 22:34:06 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:14.309 22:34:06 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:14.309 22:34:06 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:14.309 22:34:06 -- spdk/autotest.sh@72 -- # hash lcov 00:04:14.309 22:34:06 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:14.309 22:34:06 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:14.309 --rc lcov_branch_coverage=1 00:04:14.309 --rc lcov_function_coverage=1 00:04:14.309 --rc genhtml_branch_coverage=1 00:04:14.309 --rc genhtml_function_coverage=1 00:04:14.309 --rc genhtml_legend=1 00:04:14.309 --rc geninfo_all_blocks=1 00:04:14.309 ' 00:04:14.309 22:34:06 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:14.309 --rc lcov_branch_coverage=1 00:04:14.309 --rc lcov_function_coverage=1 00:04:14.309 --rc genhtml_branch_coverage=1 00:04:14.309 --rc genhtml_function_coverage=1 00:04:14.309 --rc genhtml_legend=1 00:04:14.309 --rc geninfo_all_blocks=1 00:04:14.309 ' 00:04:14.309 22:34:06 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:14.309 --rc lcov_branch_coverage=1 00:04:14.309 --rc lcov_function_coverage=1 00:04:14.309 --rc genhtml_branch_coverage=1 00:04:14.309 --rc genhtml_function_coverage=1 00:04:14.309 --rc genhtml_legend=1 00:04:14.309 --rc geninfo_all_blocks=1 00:04:14.309 --no-external' 00:04:14.309 22:34:06 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:14.309 --rc lcov_branch_coverage=1 00:04:14.309 --rc lcov_function_coverage=1 00:04:14.309 --rc genhtml_branch_coverage=1 00:04:14.309 --rc genhtml_function_coverage=1 00:04:14.309 --rc genhtml_legend=1 00:04:14.309 --rc geninfo_all_blocks=1 00:04:14.309 --no-external' 00:04:14.309 22:34:06 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:14.568 lcov: LCOV version 1.14 00:04:14.568 22:34:06 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:29.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:29.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:44.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:44.342 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:44.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:44.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:44.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:44.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:44.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:44.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:44.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:44.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:44.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:44.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:44.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:44.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:44.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:44.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:44.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:44.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:44.343 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:44.343 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:44.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:44.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:44.344 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:44.344 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:04:44.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:04:44.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:04:47.679 22:34:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:04:47.679 22:34:39 -- common/autotest_common.sh@720 -- # xtrace_disable
00:04:47.679 22:34:39 -- common/autotest_common.sh@10 -- # set +x
00:04:47.679 22:34:39 -- spdk/autotest.sh@91 -- # rm -f
00:04:47.679 22:34:39 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:48.247 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:04:48.247 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:48.247 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:48.247 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:48.247 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:48.247 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:48.247 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:48.505 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:48.505 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:48.505 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:48.505 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:48.505 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:48.505 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:48.505 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:48.505 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:48.505 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:48.505 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:48.505 22:34:40 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:48.505 22:34:40 -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:04:48.505 22:34:40 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:04:48.505 22:34:40 -- common/autotest_common.sh@1666 -- # local nvme bdf
00:04:48.505 22:34:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:04:48.505 22:34:40 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:04:48.505 22:34:40 -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:04:48.505 22:34:40 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:48.505 22:34:40
-- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:48.505 22:34:40 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:48.505 22:34:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:48.505 22:34:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:48.505 22:34:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:48.505 22:34:40 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:48.505 22:34:40 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:48.765 No valid GPT data, bailing 00:04:48.765 22:34:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:48.765 22:34:41 -- scripts/common.sh@391 -- # pt= 00:04:48.765 22:34:41 -- scripts/common.sh@392 -- # return 1 00:04:48.765 22:34:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:48.765 1+0 records in 00:04:48.765 1+0 records out 00:04:48.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00237749 s, 441 MB/s 00:04:48.765 22:34:41 -- spdk/autotest.sh@118 -- # sync 00:04:48.765 22:34:41 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:48.765 22:34:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:48.765 22:34:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:50.666 22:34:42 -- spdk/autotest.sh@124 -- # uname -s 00:04:50.666 22:34:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:50.666 22:34:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:50.666 22:34:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.666 22:34:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.666 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:50.666 ************************************ 00:04:50.666 START TEST setup.sh 00:04:50.666 ************************************ 00:04:50.666 22:34:42 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:50.666 * Looking for test storage... 00:04:50.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.666 22:34:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:50.666 22:34:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:50.666 22:34:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:50.666 22:34:42 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.666 22:34:42 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.666 22:34:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.666 ************************************ 00:04:50.666 START TEST acl 00:04:50.666 ************************************ 00:04:50.666 22:34:42 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:50.666 * Looking for test storage... 
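A note on the scratch-device probe traced above: before autotest wipes /dev/nvme0n1, block_in_use decides whether the namespace carries anything worth keeping. The sketch below is a simplified, hypothetical rendering of that decision, not the verbatim scripts/common.sh helper; the real helper consults spdk-gpt.py first (hence "No valid GPT data, bailing") and only then falls back to the blkid probe that settles the outcome in this run.

```bash
#!/usr/bin/env bash
# Hypothetical standalone rendering of the block_in_use check seen in the
# trace: a device counts as "in use" when blkid reports a partition table.
block_in_use() {
    local block=$1 pt
    # blkid prints the partition-table type (gpt, dos, ...) or nothing;
    # the trace above shows pt= coming back empty for /dev/nvme0n1.
    pt=$(blkid -s PTTYPE -o value "$block")
    [[ -n $pt ]]
}

dev=/dev/nvme0n1
if ! block_in_use "$dev"; then
    # Mirrors the log: scrub the first MiB and flush, leaving a blank device.
    dd if=/dev/zero of="$dev" bs=1M count=1
    sync
fi
```

Run as root against a disposable namespace only; the dd is destructive by design.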
00:04:50.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.666 22:34:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:50.666 22:34:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:50.666 22:34:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:50.666 22:34:42 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:50.666 22:34:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:50.666 22:34:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:50.666 22:34:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:50.666 22:34:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:50.666 22:34:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:50.666 22:34:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:50.666 22:34:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:50.667 22:34:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:50.667 22:34:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:50.667 22:34:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:50.667 22:34:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.667 22:34:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.041 22:34:44 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:52.041 22:34:44 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:52.041 22:34:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.041 22:34:44 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:52.041 22:34:44 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.041 22:34:44 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:53.418 Hugepages 00:04:53.418 node hugesize free / total 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 00:04:53.418 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.418 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:53.419 22:34:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:53.419 22:34:45 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.419 22:34:45 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.419 22:34:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:53.419 ************************************ 00:04:53.419 START TEST denied 00:04:53.419 ************************************ 00:04:53.419 22:34:45 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:53.419 22:34:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:53.419 22:34:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:53.419 22:34:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.419 22:34:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:53.419 22:34:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.796 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:54.796 22:34:46 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:54.796 22:34:46 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:54.796 22:34:46 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:54.796 22:34:46 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:54.796 22:34:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:54.796 22:34:46 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:54.796 22:34:46 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:54.796 22:34:46 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:04:54.796 22:34:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:54.796 22:34:46 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:57.329 00:04:57.329 real 0m3.602s user 0m1.015s sys 0m1.691s
00:04:57.329 22:34:49 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:57.329 22:34:49 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:04:57.329 ************************************
00:04:57.329 END TEST denied
00:04:57.329 ************************************
00:04:57.329 22:34:49 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:57.329 22:34:49 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:57.329 22:34:49 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:57.329 22:34:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:57.329 ************************************
00:04:57.329 START TEST allowed
00:04:57.329 ************************************
00:04:57.329 22:34:49 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:04:57.329 22:34:49 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:04:57.329 22:34:49 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:57.329 22:34:49 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:04:57.329 22:34:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:04:57.329 22:34:49 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:59.233 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:59.233 22:34:51 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:04:59.233 22:34:51 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:59.233 22:34:51 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:59.233 22:34:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:59.233 22:34:51 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:01.135 00:05:01.135 real 0m3.871s user 0m1.018s sys 0m1.687s
00:05:01.135 22:34:53 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:01.135 22:34:53 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:05:01.135 ************************************
00:05:01.135 END TEST allowed
00:05:01.135 ************************************
00:05:01.135 00:05:01.135 real 0m10.264s user 0m3.137s sys 0m5.131s
00:05:01.135 22:34:53 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:01.135 22:34:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:01.135 ************************************
00:05:01.135 END TEST acl
00:05:01.135 ************************************
00:05:01.135 22:34:53 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:05:01.136 22:34:53 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:01.136 22:34:53 setup.sh --
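The denied/allowed pair above exercises scripts/setup.sh purely through environment variables: PCI_BLOCKED must make setup.sh skip a controller, PCI_ALLOWED must make it rebind that controller to vfio-pci. A minimal sketch of that contract, assuming this job's workspace path and a root shell; the grep patterns are the ones the tests themselves apply to the setup.sh output:

```bash
#!/usr/bin/env bash
# Sketch of the acl denied/allowed checks, using only the entry points and
# patterns visible in the log. SPDK_ROOT matches this job's checkout.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ctrl=0000:88:00.0

# denied: a blocked controller must be skipped by "setup.sh config"
PCI_BLOCKED=" $ctrl" "$SPDK_ROOT/scripts/setup.sh" config \
    | grep "Skipping denied controller at $ctrl"
"$SPDK_ROOT/scripts/setup.sh" reset

# allowed: with the controller allowed, it must move from nvme to vfio-pci
PCI_ALLOWED="$ctrl" "$SPDK_ROOT/scripts/setup.sh" config \
    | grep -E "$ctrl .*: nvme -> .*"
"$SPDK_ROOT/scripts/setup.sh" reset
```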
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.136 22:34:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.136 ************************************ 00:05:01.136 START TEST hugepages 00:05:01.136 ************************************ 00:05:01.136 22:34:53 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:01.136 * Looking for test storage... 00:05:01.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 40701900 kB' 'MemAvailable: 45245844 kB' 'Buffers: 11312 kB' 'Cached: 13251376 kB' 'SwapCached: 0 kB' 'Active: 9245880 kB' 'Inactive: 4523700 kB' 'Active(anon): 8849280 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510168 kB' 'Mapped: 203988 kB' 'Shmem: 8342388 kB' 'KReclaimable: 234964 kB' 'Slab: 613780 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378816 kB' 'KernelStack: 12832 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 9931972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.136 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:01.137 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:01.138 22:34:53 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:01.138 22:34:53 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:01.138 22:34:53 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.138 22:34:53 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.138 22:34:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:01.138 ************************************ 00:05:01.138 START TEST default_setup 00:05:01.138 ************************************ 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.138 22:34:53 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.073 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:02.073 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:02.073 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:02.073 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:02.073 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:02.073 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:02.073 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
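Before the remaining ioatdma channels finish rebinding below, a note on the clear_hp step just traced: it walks every NUMA node and every hugepage size and zeroes nr_hugepages so the test starts from a clean slate. A standalone sketch under the same sysfs layout the trace shows:

```bash
#!/usr/bin/env bash
# Sketch of clear_hp as traced above: reset each node's hugepage pools to 0
# before the hugepages test sets its own allocation. Requires root.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes   # tells later setup.sh runs that the pools were cleared
```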
00:05:02.073 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:02.073 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:02.073 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:02.073 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:02.073 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:02.073 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:02.073 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:02.073 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:02.331 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:03.273 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.274 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.274 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42801192 kB' 'MemAvailable: 47345136 kB' 'Buffers: 11312 kB' 'Cached: 13251472 kB' 'SwapCached: 0 kB' 'Active: 9264560 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867960 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528820 kB' 'Mapped: 204100 kB' 'Shmem: 8342484 kB' 'KReclaimable: 234964 kB' 'Slab: 613420 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378456 kB' 'KernelStack: 12816 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9952800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.273 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:03.274 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:03.274 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42801192 kB' 'MemAvailable: 47345136 kB' 'Buffers: 11312 kB' 'Cached: 13251472 kB' 'SwapCached: 0 kB' 'Active: 9264560 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867960 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528820 kB' 'Mapped: 204100 kB' 'Shmem: 8342484 kB' 'KReclaimable: 234964 kB' 'Slab: 613420 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378456 kB' 'KernelStack: 12816 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9952800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
[repetitive xtrace elided: setup/common.sh@31-@32 step key by key through the snapshot (MemTotal through HardwareCorrupted), each non-matching key hitting `continue`, until AnonHugePages is reached]
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
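The block above is bash xtrace of setup/common.sh's get_meminfo scanning a /proc/meminfo snapshot key by key until it reaches AnonHugePages and echoes its value (0 here). A condensed, stream-based sketch of the same idea (the real helper snapshots the file with mapfile first and also handles per-node files):

    # Print the value of one /proc/meminfo key, or 0 if it is absent.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # var is the key (e.g. "AnonHugePages"), val the number, _ swallows "kB"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        echo 0
    }

    get_meminfo AnonHugePages   # -> 0 on this machine, matching the trace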
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:03.275 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42800324 kB' 'MemAvailable: 47344268 kB' 'Buffers: 11312 kB' 'Cached: 13251472 kB' 'SwapCached: 0 kB' 'Active: 9264676 kB' 'Inactive: 4523700 kB' 'Active(anon): 8868076 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528928 kB' 'Mapped: 204092 kB' 'Shmem: 8342484 kB' 'KReclaimable: 234964 kB' 'Slab: 613396 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378432 kB' 'KernelStack: 12784 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9952820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
[repetitive xtrace elided: setup/common.sh@31-@32 step key by key through the snapshot (MemTotal through HugePages_Rsvd), each non-matching key hitting `continue`, until HugePages_Surp is reached]
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
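Each get_meminfo call above also tests `/sys/devices/system/node/node/meminfo` with an empty node variable, i.e. the caller did not ask for a specific NUMA node, so the generic /proc/meminfo is used. A sketch of that branch with a node argument supplied (per-node files prefix every key with `Node <n>`, which is what the `${mem[@]#Node +([0-9]) }` strip seen in the trace removes; the pattern needs extglob):

    shopt -s extglob            # enables the +([0-9]) pattern below
    node=0                      # illustrative NUMA node number
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: ..."; drop the prefix so the
    # same key-by-key scan works for both file flavors.
    mem=("${mem[@]#Node +([0-9]) }")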
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:03.277 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42800636 kB' 'MemAvailable: 47344580 kB' 'Buffers: 11312 kB' 'Cached: 13251492 kB' 'SwapCached: 0 kB' 'Active: 9264056 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867456 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528208 kB' 'Mapped: 203976 kB' 'Shmem: 8342504 kB' 'KReclaimable: 234964 kB' 'Slab: 613380 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378416 kB' 'KernelStack: 12752 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9952840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
[repetitive xtrace elided: setup/common.sh@31-@32 step key by key through the snapshot (MemTotal through HugePages_Free), each non-matching key hitting `continue`, until HugePages_Rsvd is reached]
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:03.279 nr_hugepages=1024
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:03.279 resv_hugepages=0
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:03.279 surplus_hugepages=0
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:03.279 anon_hugepages=0
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528204 kB' 'Mapped: 203976 kB' 'Shmem: 8342528 kB' 'KReclaimable: 234964 kB' 'Slab: 613380 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378416 kB' 'KernelStack: 12752 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9952864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB' 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.279 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.280 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.280 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:03.280 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.280 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.280 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.280 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:03.280 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.280 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.280 22:34:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # (xtrace condensed: the same per-field scan repeats for the HugePages_Total lookup; every field from Active through Unaccepted is skipped with continue) 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
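What the condensed trace above is doing: get_meminfo in setup/common.sh reads the meminfo file one line at a time with IFS=': ', compares each field name against the requested key, and echoes the value column on a match — hence the echo 1024 / return 0 once HugePages_Total is reached. A minimal standalone sketch of the same loop (the name get_field is ours, not SPDK's):

get_field() {
    # Scan /proc/meminfo for one field and print its value column,
    # mirroring the loop traced at setup/common.sh@31-33 above.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_field HugePages_Total    # on the host above this prints 1024

The per-node variant seen below swaps in /sys/devices/system/node/node<N>/meminfo and first strips the leading "Node <N> " prefix from each line (the extglob substitution at setup/common.sh@29).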
00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20560036 kB' 'MemUsed: 12316904 kB' 'SwapCached: 0 kB' 'Active: 5857880 kB' 'Inactive: 3429848 kB' 'Active(anon): 5585936 kB' 'Inactive(anon): 0 kB' 'Active(file): 271944 kB' 'Inactive(file): 3429848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9202204 kB' 'Mapped: 89672 kB' 'AnonPages: 88780 kB' 'Shmem: 5500412 kB' 'KernelStack: 6488 kB' 'PageTables: 3004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95108 kB' 'Slab: 301692 kB' 'SReclaimable: 95108 kB' 'SUnreclaim: 206584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.281 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:03.282 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # (xtrace condensed: node0's meminfo is scanned field by field, MemFree through HugePages_Total, for the HugePages_Surp lookup) 00:05:03.282 22:34:55
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.282 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.282 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.282 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:03.282 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:03.282 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:03.282 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.282 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:03.282 22:34:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:03.283 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.283 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.283 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.283 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.283 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:03.283 node0=1024 expecting 1024 00:05:03.283 22:34:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:03.283 00:05:03.283 real 0m2.348s 00:05:03.283 user 0m0.643s 00:05:03.283 sys 0m0.839s 00:05:03.283 22:34:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.283 22:34:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:03.283 ************************************ 00:05:03.283 END TEST default_setup 00:05:03.283 ************************************ 00:05:03.283 22:34:55 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:03.283 22:34:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.283 22:34:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.283 22:34:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.283 ************************************ 00:05:03.283 START TEST per_node_1G_alloc 00:05:03.283 ************************************ 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.283 22:34:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.665 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:04.665 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:04.665 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:04.665 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:04.665 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:04.665 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:04.665 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:04.665 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:04.665 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:04.665 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:04.665 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:04.665 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:04.665 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:04.665 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:04.665 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:04.665 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:04.665 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:04.665 22:34:56 
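The numbers in the get_test_nr_hugepages trace above line up as follows: the 1048576 kB (1 GiB) request works out to 512 of the default 2048 kB hugepages, each of the two user nodes gets 512, so scripts/setup.sh runs with NRHUGE=512 HUGENODE=0,1 and lands at nr_hugepages=1024 overall. A quick restatement of that arithmetic (variable names ours):

size_kb=1048576                          # requested size: 1 GiB in kB
hugepage_kb=2048                         # Hugepagesize reported in /proc/meminfo
node_ids=(0 1)                           # HUGENODE=0,1
per_node=$(( size_kb / hugepage_kb ))    # 512 pages on each node
total=$(( per_node * ${#node_ids[@]} ))  # 1024 pages across both NUMA nodes
echo "NRHUGE=$per_node -> nr_hugepages=$total"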
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42776040 kB' 'MemAvailable: 47319984 kB' 'Buffers: 11312 kB' 'Cached: 13251584 kB' 'SwapCached: 0 kB' 'Active: 9264392 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867792 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528456 kB' 'Mapped: 203996 kB' 'Shmem: 8342596 kB' 'KReclaimable: 234964 kB' 'Slab: 613580 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378616 kB' 'KernelStack: 12800 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9952672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB' 00:05:04.665 22:34:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31-32 -- # (xtrace condensed: the AnonHugePages lookup walks /proc/meminfo exactly as before, skipping MemTotal through WritebackTmp while the wall clock ticks from 22:34:56 to 22:34:57) 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc (trace truncated here)
-- setup/common.sh@32 -- # continue 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
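
For reference, the parsing loop that generates all of these trace lines is small. Below is a minimal bash sketch of get_meminfo as it can be reconstructed from the setup/common.sh@16-33 line references in this log; it is a reconstruction, not the verbatim SPDK source, and the extglob line and the final return 1 are assumptions.

    # Minimal sketch of setup/common.sh:get_meminfo, reconstructed from the
    # common.sh@16-33 trace references above (not the verbatim SPDK source).
    shopt -s extglob  # assumed: required by the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2  # e.g. get=AnonHugePages; empty node => system-wide
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # A per-node query reads the sysfs copy instead (the common.sh@23 check).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix of sysfs lines
        # Every "[[ key == ... ]] / continue" pair in the trace is one turn of
        # this loop; it stops at the first key equal to $get.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"  # e.g. 0 for AnonHugePages above (kB for sizes, a count for HugePages_*)
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1  # assumed: requested key not present
    }
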
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42776420 kB' 'MemAvailable: 47320364 kB' 'Buffers: 11312 kB' 'Cached: 13251588 kB' 'SwapCached: 0 kB' 'Active: 9264300 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867700 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528300 kB' 'Mapped: 203988 kB' 'Shmem: 8342600 kB' 'KReclaimable: 234964 kB' 'Slab: 613548 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378584 kB' 'KernelStack: 12736 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9952696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:04.667 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: same per-key scan as above, continue on every field from MemTotal through HugePages_Rsvd]
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
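
Because this test is per_node_1G_alloc, the same helper is also called with a node argument elsewhere in the run; that is what the node$node/meminfo check above exists for. A hypothetical pair of invocations against the sketch above, with the node number and variable names purely illustrative:

    # Illustrative only: system-wide vs. per-node reads via the sketch above.
    surp=$(get_meminfo HugePages_Surp)           # scans /proc/meminfo -> 0 in this run
    node0_huge=$(get_meminfo HugePages_Total 0)  # scans /sys/devices/system/node/node0/meminfo
    echo "surplus=$surp node0_hugepages=$node0_huge"
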
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19-31 -- # [trace condensed: same local/mapfile/"Node N " prefix-strip setup as the HugePages_Surp call above]
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42776852 kB' 'MemAvailable: 47320796 kB' 'Buffers: 11312 kB' 'Cached: 13251604 kB' 'SwapCached: 0 kB' 'Active: 9264108 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867508 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528036 kB' 'Mapped: 203988 kB' 'Shmem: 8342616 kB' 'KReclaimable: 234964 kB' 'Slab: 613612 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378648 kB' 'KernelStack: 12736 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9952848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:04.669 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: same per-key scan as above, continue on every field from MemTotal through HugePages_Free]
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:04.671 nr_hugepages=1024
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:04.671 resv_hugepages=0
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:04.671 surplus_hugepages=0
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:04.671 anon_hugepages=0
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:04.671 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:04.671 [xtrace condensed: setup/common.sh@17-31 sets get=HugePages_Total with no node argument, keeps mem_f=/proc/meminfo, and mapfiles the system-wide snapshot below]
00:05:04.672 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42779544 kB' 'MemAvailable: 47323488 kB' 'Buffers: 11312 kB' 'Cached: 13251632 kB' 'SwapCached: 0 kB' 'Active: 9264396 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867796 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528324 kB' 'Mapped: 203988 kB' 'Shmem: 8342644 kB' 'KReclaimable: 234964 kB' 'Slab: 613612 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378648 kB' 'KernelStack: 12768 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9953240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:04.673 [xtrace condensed: setup/common.sh@31-32 scans every field above and "continue"s past all that are not HugePages_Total]
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
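The get_nodes walk just traced builds the per-node expectation table by globbing sysfs; the loop continues for node1 below. A hedged sketch of that discovery loop (the extglob pattern is taken from the trace; 512 pages per node is this rig's even split of the 1024 total):

shopt -s extglob nullglob

# One entry per /sys/devices/system/node/nodeN directory; this
# two-socket machine yields nodes_sys[0]=512 and nodes_sys[1]=512.
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512  # ".../node1" -> index 1
done
no_nodes=${#nodes_sys[@]}          # 2 in this run
(( no_nodes > 0 )) || exit 1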
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:04.673 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:04.673 [xtrace condensed: setup/common.sh@17-31 sets get=HugePages_Surp, node=0, switches mem_f to /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix, and mapfiles the node snapshot below]
00:05:04.674 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21599400 kB' 'MemUsed: 11277540 kB' 'SwapCached: 0 kB' 'Active: 5857588 kB' 'Inactive: 3429848 kB' 'Active(anon): 5585644 kB' 'Inactive(anon): 0 kB' 'Active(file): 271944 kB' 'Inactive(file): 3429848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9202264 kB' 'Mapped: 89672 kB' 'AnonPages: 88292 kB' 'Shmem: 5500472 kB' 'KernelStack: 6440 kB' 'PageTables: 2948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95108 kB' 'Slab: 301956 kB' 'SReclaimable: 95108 kB' 'SUnreclaim: 206848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:04.675 [xtrace condensed: per-field scan of the node0 snapshot; every field that is not HugePages_Surp is skipped via "continue"]
00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:04.675 [xtrace condensed: same lookup against /sys/devices/system/node/node1/meminfo; node snapshot below]
00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21180796 kB' 'MemUsed: 6483976 kB' 'SwapCached: 0 kB' 'Active: 3406660 kB' 'Inactive: 1093852 kB' 'Active(anon): 3282004 kB' 'Inactive(anon): 0 kB' 'Active(file): 124656 kB' 'Inactive(file): 1093852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4060724 kB' 'Mapped: 114316 kB' 'AnonPages: 439864 kB' 'Shmem: 2842216 kB' 'KernelStack: 6328 kB' 'PageTables: 5164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139856 kB' 'Slab: 311656 kB' 'SReclaimable: 139856 kB' 'SUnreclaim: 171800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
-- setup/common.sh@32 -- # continue 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.675 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.676 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.676 
22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-field scan: each remaining meminfo key, Shmem through HugePages_Free, fails [[ $var == HugePages_Surp ]] and hits continue]
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:04.677 node0=512 expecting 512
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:04.677 node1=512 expecting 512
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:04.677
00:05:04.677 real 0m1.405s
00:05:04.677 user 0m0.585s
00:05:04.677 sys 0m0.768s
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:04.677 22:34:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:04.677 ************************************
00:05:04.677 END TEST per_node_1G_alloc
00:05:04.677 ************************************
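The get_meminfo loop traced above reads the relevant meminfo file with IFS=': ' and read -r var val _, skipping every field via continue until the requested key matches, then echoes its value and returns. A minimal standalone sketch of that lookup pattern in bash follows; get_meminfo_value is an illustrative name for this excerpt, not the actual setup/common.sh helper:

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern in the trace: walk meminfo line by
# line, skip non-matching keys with continue, print the first match's value.
# get_meminfo_value is an illustrative name, not the SPDK setup/common.sh helper.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # same [[ key == target ]] / continue shape as above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_value HugePages_Surp    # prints 0 on this host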
00:05:04.961 22:34:57 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:04.961 22:34:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:04.961 22:34:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:04.961 22:34:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:04.961 ************************************
00:05:04.961 START TEST even_2G_alloc
00:05:04.961 ************************************
00:05:04.961 22:34:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:05:04.961 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.962 22:34:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:05.897 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:05.897 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:05.897 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:05.897 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:05.897 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:05.897 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:05.897 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:05.897 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:05.897 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:05.897 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:05.897 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:05.897 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:05.897 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:05.897 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:05.897 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:05.897 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:05.897 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42774260 kB' 'MemAvailable: 47318204 kB' 'Buffers: 11312 kB' 'Cached: 13251728 kB' 'SwapCached: 0 kB' 'Active: 9264376 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867776 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528220 kB' 'Mapped: 203968 kB' 'Shmem: 8342740 kB' 'KReclaimable: 234964 kB' 'Slab: 613588 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378624 kB' 'KernelStack: 12768 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9953316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:06.161 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [per-field scan: each meminfo key, MemTotal through HardwareCorrupted, fails [[ $var == AnonHugePages ]] and hits continue]
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
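For reference, get_test_nr_hugepages_per_node (traced at 00:05:04.962 above) spreads nr_hugepages evenly over the detected NUMA nodes by walking _no_nodes down and assigning the same share to each slot, which is where this run's 512/512 split comes from. A short sketch of that division, reusing the trace's variable names but not reproducing the hugepages.sh source verbatim:

# Sketch of the even split traced under get_test_nr_hugepages_per_node:
# 1024 pages over 2 nodes -> nodes_test[1]=512, then nodes_test[0]=512.
_nr_hugepages=1024
_no_nodes=2
per_node=$(( _nr_hugepages / _no_nodes ))   # 512
declare -a nodes_test
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$per_node     # fill the highest-numbered node first, as in the trace
    (( _no_nodes-- ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512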
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42775308 kB' 'MemAvailable: 47319252 kB' 'Buffers: 11312 kB' 'Cached: 13251732 kB' 'SwapCached: 0 kB' 'Active: 9264828 kB' 'Inactive: 4523700 kB' 'Active(anon): 8868228 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528716 kB' 'Mapped: 204020 kB' 'Shmem: 8342744 kB' 'KReclaimable: 234964 kB' 'Slab: 613660 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378696 kB' 'KernelStack: 12800 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9953336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:06.163 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [per-field scan: each meminfo key, MemTotal through HugePages_Rsvd, fails [[ $var == HugePages_Surp ]] and hits continue]
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
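At this point verify_nr_hugepages has collected anon=0 and surp=0 and is about to fetch HugePages_Rsvd; with all three at zero, the whole 1024-page pool should land 512/512 on the two nodes. A hedged sketch of that bookkeeping, assuming the get_meminfo_value helper sketched earlier is sourced, and using an inferred expected-count formula (the exact hugepages.sh arithmetic is not shown in this excerpt):

# Sketch: the three counters verify_nr_hugepages queries, plus an inferred
# per-node expectation. The (total - surp - resv) / nodes formula is an
# assumption, chosen to reproduce the 'node0=512 expecting 512' output above.
anon=$(get_meminfo_value AnonHugePages)     # 0 in this run
surp=$(get_meminfo_value HugePages_Surp)    # 0
resv=$(get_meminfo_value HugePages_Rsvd)    # 0
total=$(get_meminfo_value HugePages_Total)  # 1024
nodes=2
echo "expecting $(( (total - surp - resv) / nodes )) pages per node"   # 512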
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42775420 kB' 'MemAvailable: 47319364 kB' 'Buffers: 11312 kB' 'Cached: 13251748 kB' 'SwapCached: 0 kB' 'Active: 9264640 kB' 'Inactive: 4523700 kB' 'Active(anon): 8868040 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528488 kB' 'Mapped: 204004 kB' 'Shmem: 8342760 kB' 'KReclaimable: 234964 kB' 'Slab: 613664 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378700 kB' 'KernelStack: 12784 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9953356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:06.165 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [per-field scan: meminfo keys MemTotal through Mapped fail [[ $var == HugePages_Rsvd ]] and hit continue]
00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 
22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.166 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.167 nr_hugepages=1024 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.167 resv_hugepages=0 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.167 surplus_hugepages=0 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.167 anon_hugepages=0 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42775420 
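[Editor's note: the key-by-key scan above repeats for every get_meminfo call in this log, so here is a condensed sketch of the helper, reconstructed from the setup/common.sh line numbers in the xtrace. It is a readability aid, not the verbatim SPDK source; the no-match return path is an assumption, since every traced call matches.]

```bash
#!/usr/bin/env bash
# Sketch of setup/common.sh's get_meminfo as seen in the xtrace (common.sh@17-33).
# get_meminfo KEY [NODE] prints KEY's numeric value from /proc/meminfo, or from
# one NUMA node's meminfo file when NODE is given.
shopt -s extglob # the +([0-9]) pattern below is an extended glob

get_meminfo() {
	local get=$1 node=${2:-} var val _
	local mem_f=/proc/meminfo mem
	# common.sh@23-24: prefer the per-node file when it exists
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# common.sh@29: per-node lines carry a "Node N " prefix; strip it so both
	# sources parse identically below
	mem=("${mem[@]#Node +([0-9]) }")
	# common.sh@31-33: split each "Key: value kB" line and stop on a match
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1 # assumption: never reached in the calls traced here
}

get_meminfo HugePages_Rsvd # prints 0 on the machine traced above
```

With nr_hugepages=1024 and resv=surp=0, the hugepages.sh@107 check (( 1024 == nr_hugepages + surp + resv )) passes, and the test re-reads HugePages_Total system-wide: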
00:05:06.167 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42775420 kB' 'MemAvailable: 47319364 kB' 'Buffers: 11312 kB' 'Cached: 13251772 kB' 'SwapCached: 0 kB' 'Active: 9264676 kB' 'Inactive: 4523700 kB' 'Active(anon): 8868076 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528488 kB' 'Mapped: 204004 kB' 'Shmem: 8342784 kB' 'KReclaimable: 234964 kB' 'Slab: 613664 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378700 kB' 'KernelStack: 12784 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9953380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
[xtrace elided: the same key-by-key scan repeats; every key from MemTotal through Unaccepted fails the [[ $var == HugePages_Total ]] test and hits continue]
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
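[Editor's note: the get_nodes call traced just above enumerates the NUMA nodes and records what even_2G_alloc expects on each of them. A condensed sketch, reconstructed from the hugepages.sh@27-33 trace lines; the literal 512 is the xtrace-expanded value, which the source presumably derives from nr_hugepages / no_nodes.]

```bash
# Sketch of get_nodes as it appears in the setup/hugepages.sh@27-33 trace.
shopt -s extglob # for the node+([0-9]) glob
declare -A nodes_sys

get_nodes() {
	local node
	# One entry per NUMA node present in sysfs; even_2G_alloc expects the
	# 1024 global pages to land as 512 x 2 MiB pages on each of the 2 nodes.
	for node in /sys/devices/system/node/node+([0-9]); do
		nodes_sys[${node##*node}]=512
	done
	no_nodes=${#nodes_sys[@]}
	(( no_nodes > 0 )) # the trace shows no_nodes=2 on this rig
}
```

The per-node accounting then calls get_meminfo HugePages_Surp once per node, starting with node 0: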
00:05:06.169 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21587988 kB' 'MemUsed: 11288952 kB' 'SwapCached: 0 kB' 'Active: 5858236 kB' 'Inactive: 3429848 kB' 'Active(anon): 5586292 kB' 'Inactive(anon): 0 kB' 'Active(file): 271944 kB' 'Inactive(file): 3429848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9202268 kB' 'Mapped: 89672 kB' 'AnonPages: 88964 kB' 'Shmem: 5500476 kB' 'KernelStack: 6472 kB' 'PageTables: 3048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95108 kB' 'Slab: 301908 kB' 'SReclaimable: 95108 kB' 'SUnreclaim: 206800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: node-0 scan; every key from MemTotal through HugePages_Free fails the [[ $var == HugePages_Surp ]] test and hits continue]
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.170 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
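[Editor's note: the common.sh@23-24 branch above is why the "Node N " prefix strip at common.sh@29 exists. The per-node meminfo files are standard kernel sysfs interfaces; the values shown are the ones from the dumps in this log, not re-measured.]

```bash
# Per-node meminfo lines carry a "Node N " prefix that /proc/meminfo lacks:
grep HugePages_Surp /proc/meminfo
# HugePages_Surp:        0
grep HugePages_Surp /sys/devices/system/node/node1/meminfo
# Node 1 HugePages_Surp:     0
```

After the strip, the same IFS=': ' read loop works unchanged for both sources, which is what the node-1 scan below relies on: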
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.171 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:06.172 node0=512 expecting 512 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:06.172 node1=512 expecting 512 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:06.172 00:05:06.172 real 0m1.439s 00:05:06.172 user 0m0.618s 00:05:06.172 sys 0m0.784s 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.172 22:34:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:06.172 ************************************ 00:05:06.172 END TEST even_2G_alloc 00:05:06.172 ************************************ 00:05:06.172 22:34:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:06.172 22:34:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.172 22:34:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.172 22:34:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:06.431 ************************************ 00:05:06.431 START TEST odd_alloc 00:05:06.431 ************************************ 00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:06.431 22:34:58 
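The get_test_nr_hugepages 2098176 call that opens odd_alloc converts a size in kB into a whole number of 2048 kB hugepages. A minimal sketch of that arithmetic, assuming the rounding direction is inferred from the nr_hugepages=1025 assignment that follows in the trace (the helper name and exact formula in setup/hugepages.sh are not shown here):

    # sketch: kB-to-hugepage-count conversion behind get_test_nr_hugepages
    size_kb=2098176                  # 2049 MB requested by the test
    hugepage_kb=2048                 # 2 MB pages, per 'Hugepagesize: 2048 kB' above
    nr=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # round up
    echo "$nr"                       # 1025 -- an odd page count, hence the test name

2098176 / 2048 is 1024.5, so only a round-up produces the odd total of 1025 pages the test then distributes across the two NUMA nodes.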
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:06.431 22:34:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:07.367 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:07.367 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:07.367 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:07.367 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:07.367 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:07.367 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:07.367 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:07.367 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:07.367 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:07.367 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:07.367 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:07.367 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:07.367 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:07.367 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:07.367 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:07.367 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:07.367 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.633 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42781104 kB' 'MemAvailable: 47325048 kB' 'Buffers: 11312 kB' 'Cached: 13251856 kB' 'SwapCached: 0 kB' 'Active: 9260908 kB' 'Inactive: 4523700 kB' 'Active(anon): 8864308 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524632 kB' 'Mapped: 203228 kB' 'Shmem: 8342868 kB' 'KReclaimable: 234964 kB' 'Slab: 613456 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378492 kB' 'KernelStack: 12752 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9938380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:07.633 [xtrace condensed: setup/common.sh@31-32 read/continue loop scans the /proc/meminfo fields (MemTotal .. HardwareCorrupted) without a match]
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.635 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42780724 kB' 'MemAvailable: 47324668 kB' 'Buffers: 11312 kB' 'Cached: 13251860 kB' 'SwapCached: 0 kB' 'Active: 9261692 kB' 'Inactive: 4523700 kB' 'Active(anon): 8865092 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525432 kB' 'Mapped: 203244 kB' 'Shmem: 8342872 kB' 'KReclaimable: 234964 kB' 'Slab: 613488 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378524 kB' 'KernelStack: 12816 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9940776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
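Every meminfo query in this trace goes through the same setup/common.sh loop: pick /proc/meminfo, or the per-node sysfs file when a node argument is given, strip the "Node N" prefix those sysfs lines carry, then scan key by key until the requested field matches and echo its value. A hedged reconstruction of that loop, pieced together from the xtrace rather than copied from common.sh:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f mem line
        mem_f=/proc/meminfo
        # per-node stats live under sysfs when a node number is supplied
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs lines are prefixed "Node N "; strip it so keys line up
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp 1, this reproduces the node-1 query above and prints 0; with no node argument it reads /proc/meminfo, as in the AnonHugePages and HugePages_Surp calls from verify_nr_hugepages.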
00:05:07.635 [xtrace condensed: setup/common.sh@31-32 read/continue loop scans the /proc/meminfo fields (MemTotal .. HugePages_Rsvd) without a match; the excerpt ends here, mid-scan, before HugePages_Surp is reached]
-- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42786780 kB' 'MemAvailable: 47330724 kB' 'Buffers: 11312 kB' 'Cached: 13251868 kB' 'SwapCached: 0 kB' 'Active: 9261980 kB' 'Inactive: 4523700 kB' 'Active(anon): 8865380 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525696 kB' 'Mapped: 203168 kB' 'Shmem: 8342880 kB' 'KReclaimable: 234964 kB' 'Slab: 613472 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378508 kB' 'KernelStack: 12880 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9939420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.637 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 
22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.638 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:07.639 nr_hugepages=1025 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.639 resv_hugepages=0 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.639 surplus_hugepages=0 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.639 anon_hugepages=0 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.639 22:34:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42784588 kB' 'MemAvailable: 47328532 kB' 'Buffers: 11312 kB' 'Cached: 13251868 kB' 'SwapCached: 0 kB' 'Active: 9262384 kB' 'Inactive: 4523700 kB' 'Active(anon): 8865784 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526056 kB' 'Mapped: 203168 kB' 'Shmem: 8342880 kB' 'KReclaimable: 234964 kB' 'Slab: 613432 kB' 'SReclaimable: 234964 kB' 'SUnreclaim: 378468 kB' 'KernelStack: 13232 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9940808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196820 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.639 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:34:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:07.640 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21604884 kB' 'MemUsed: 11272056 kB' 'SwapCached: 0 kB' 'Active: 5858512 kB' 'Inactive: 3429848 kB' 'Active(anon): 5586568 kB' 'Inactive(anon): 0 kB' 'Active(file): 271944 kB' 'Inactive(file): 3429848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9202308 kB' 'Mapped: 89084 kB' 'AnonPages: 88956 kB' 'Shmem: 5500516 kB' 'KernelStack: 6472 kB' 'PageTables: 2908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95108 kB' 'Slab: 301828 kB' 'SReclaimable: 95108 kB' 'SUnreclaim: 206720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.641 22:35:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: Unevictable through HugePages_Free each tested against HugePages_Surp and skipped via continue] 00:05:07.641 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.642 22:35:00
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21179984 kB' 'MemUsed: 6484788 kB' 'SwapCached: 0 kB' 'Active: 3404456 kB' 'Inactive: 1093852 kB' 'Active(anon): 3279800 kB' 'Inactive(anon): 0 kB' 'Active(file): 124656 kB' 'Inactive(file): 1093852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4060888 kB' 'Mapped: 114100 kB' 'AnonPages: 437528 kB' 'Shmem: 2842380 kB' 'KernelStack: 6264 kB' 'PageTables: 4752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139856 kB' 'Slab: 311660 kB' 'SReclaimable: 139856 kB' 'SUnreclaim: 171804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.642 22:35:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.642 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: SwapCached through HugePages_Free each tested against HugePages_Surp and skipped via continue] 00:05:07.643 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.643 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.643 22:35:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.644 22:35:00
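The get_meminfo calls traced above all follow one pattern: pick /proc/meminfo or a per-node meminfo file, strip the "Node N " prefix, then scan field by field until the requested key matches and echo its value. A minimal sketch reconstructed from the xtrace lines (simplified; setup/common.sh is the authoritative implementation):

  # Sketch: fetch one field from /proc/meminfo or a per-node meminfo file.
  get_meminfo() {
      local get=$1 node=$2
      local var val _
      local mem_f=/proc/meminfo
      # Per-node counters live under /sys/devices/system/node/nodeN/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Lines look like "HugePages_Surp: 0"; per-node files prefix "Node N ".
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  # In the run above: get_meminfo HugePages_Surp 1  -> 0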
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:07.644 node0=512 expecting 513 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:07.644 node1=513 expecting 512 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:07.644 00:05:07.644 real 0m1.386s 00:05:07.644 user 0m0.589s 00:05:07.644 sys 0m0.757s 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.644 22:35:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:07.644 ************************************ 00:05:07.644 END TEST odd_alloc 00:05:07.644 ************************************ 00:05:07.644 22:35:00 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:07.644 22:35:00 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.644 22:35:00 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.644 22:35:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.644 ************************************ 00:05:07.644 START TEST custom_alloc 00:05:07.644 ************************************ 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.644 22:35:00
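get_test_nr_hugepages, called twice in custom_alloc above (1048576 and, further down, 2097152), just turns a size in kB into a count of default-sized hugepages. A rough sketch of the arithmetic under a hypothetical name, assuming the 2048 kB Hugepagesize reported in the meminfo dumps (the real setup/hugepages.sh helper reads the page size at runtime and takes extra arguments):

  # Sketch: size in kB -> number of default-sized (2048 kB) hugepages.
  size_kb_to_hugepages() {
      local size=$1                 # size in kB, e.g. 1048576 for 1 GiB
      local default_hugepages=2048  # kB; 'Hugepagesize: 2048 kB' in the dumps
      (( size >= default_hugepages )) && echo $(( size / default_hugepages ))
  }
  # size_kb_to_hugepages 1048576   -> 512   (matches nr_hugepages=512 above)
  # size_kb_to_hugepages 2097152   -> 1024  (matches nr_hugepages=1024 below)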
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:07.644 22:35:00
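The ': 256 / : 1' and ': 0 / : 0' no-ops traced at setup/hugepages.sh@83-84 come from the default per-node split: the requested page count is divided over the NUMA nodes from the last node down. A sketch of that loop under a hypothetical name, reconstructed from the trace (the real get_test_nr_hugepages_per_node also honors user_nodes and nodes_hp overrides, as the following lines show):

  # Sketch: spread _nr_hugepages evenly across _no_nodes NUMA nodes.
  split_hugepages_per_node() {
      local _nr_hugepages=$1 _no_nodes=$2
      local -g nodes_test=()
      while (( _no_nodes > 0 )); do
          nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
          : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))  # the ': 256' / ': 0' above
          : $(( --_no_nodes ))                                 # the ': 1' / ': 0' above
      done
  }
  # split_hugepages_per_node 512 2  -> nodes_test=(256 256), as in this run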
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.644 22:35:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.029 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.029 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:09.029 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:09.029 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.029 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.029 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.029 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.029 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.029 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.029 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.029 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:09.029 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.029 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.029 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.029 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.029 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.029 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.029 22:35:01
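Right before 'setup output', the trace shows custom_alloc folding its per-node plan into the HUGENODE string that scripts/setup.sh consumes. A compressed sketch of that assembly (values are the ones from this run; the comma join relies on the 'local IFS=,' set at setup/hugepages.sh@167):

  # Sketch: build HUGENODE from the per-node hugepage plan.
  nodes_hp=([0]=512 [1]=1024)   # node 0: 1 GiB worth, node 1: 2 GiB worth
  HUGENODE=()
  _nr_hugepages=0
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
      (( _nr_hugepages += nodes_hp[node] ))
  done
  IFS=,   # makes "${HUGENODE[*]}" join with commas
  echo "${HUGENODE[*]}"    # -> nodes_hp[0]=512,nodes_hp[1]=1024
  echo "$_nr_hugepages"    # -> 1536, the total verified below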
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.030 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 41739604 kB' 'MemAvailable: 46283532 kB' 'Buffers: 11312 kB' 'Cached: 13251984 kB' 'SwapCached: 0 kB' 'Active: 9263432 kB' 'Inactive: 4523700 kB' 'Active(anon): 8866832 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527148 kB' 'Mapped: 203192 kB' 'Shmem: 8342996 kB' 'KReclaimable: 234932 kB' 'Slab: 613172 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378240 kB' 'KernelStack: 12768 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9938276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB' 00:05:09.030 22:35:01
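The '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test at the top of verify_nr_hugepages is checking the kernel's transparent hugepage mode: the bracketed word in that sysfs file is the active setting, and AnonHugePages only matters when THP is not disabled. A small sketch of that check (a hedged reconstruction, not the verbatim function; get_meminfo is the sketch given earlier):

  # Sketch: only sample AnonHugePages when THP is not set to [never].
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
  fi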
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: MemTotal through HardwareCorrupted each tested against AnonHugePages and skipped via continue] 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 41739712 kB' 'MemAvailable: 46283640 kB' 'Buffers: 11312 kB' 'Cached: 13252000 kB' 'SwapCached: 0 kB' 'Active: 9262880 kB' 'Inactive: 4523700 kB' 'Active(anon): 8866280 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526504 kB' 'Mapped: 203180 kB' 'Shmem: 8343012 kB' 'KReclaimable: 234932 kB' 'Slab: 613180 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378248 kB' 'KernelStack: 12736 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9938664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB' 00:05:09.031 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: MemTotal through Shmem each tested against HugePages_Surp and skipped via continue] 00:05:09.032 22:35:01
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.032 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 41740368 kB' 'MemAvailable: 46284296 kB' 'Buffers: 11312 kB' 'Cached: 13252016 kB' 'SwapCached: 0 kB' 'Active: 9262936 kB' 'Inactive: 4523700 kB' 'Active(anon): 8866336 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526628 kB' 'Mapped: 203180 kB' 'Shmem: 8343028 kB' 'KReclaimable: 234932 kB' 'Slab: 613184 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378252 kB' 'KernelStack: 12736 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9938684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.033 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.034 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc 
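The compare/continue runs above are bash xtrace from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a node's meminfo file under sysfs when a node id is supplied; here node is empty, so the sysfs existence test fails and the global file is used), strips any 'Node <id> ' prefix, then walks the fields with IFS=': ' until the requested key matches and echoes its value. Below is a minimal sketch of that lookup pattern reconstructed from the traced commands; the loop body mirrors what the trace shows, while the argument handling and the return on a miss are assumptions.

#!/usr/bin/env bash
shopt -s extglob

# Sketch of the meminfo lookup traced above; anything not visible in the
# xtrace (argument handling, behavior when the key is missing) is assumed.
get_meminfo() {
	local get=$1
	local node=${2:-}   # optional NUMA node id (assumed interface)
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# Per-node counters live under sysfs; with node unset this is the
	# failing '[[ -e /sys/devices/system/node/node/meminfo ]]' seen above.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix each line with 'Node <id> '; strip it so both
	# formats parse the same way (a no-op for /proc/meminfo).
	mem=("${mem[@]#Node +([0-9]) }")

	# Lines look like 'HugePages_Surp:   0' or 'MemFree: 41739712 kB';
	# splitting on ': ' leaves the key in var and the number in val.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp   # prints 0 on the machine captured above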
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:05:09.035 nr_hugepages=1536
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:09.035 resv_hugepages=0
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:09.035 surplus_hugepages=0
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:09.035 anon_hugepages=0
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
[... '# local' / '# mapfile' / '# IFS' setup records identical to the lookups above elided ...]
00:05:09.035 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 41740552 kB' 'MemAvailable: 46284480 kB' 'Buffers: 11312 kB' 'Cached: 13252036 kB' 'SwapCached: 0 kB' 'Active: 9262888 kB' 'Inactive: 4523700 kB' 'Active(anon): 8866288 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526540 kB' 'Mapped: 203180 kB' 'Shmem: 8343048 kB' 'KReclaimable: 234932 kB' 'Slab: 613184 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378252 kB' 'KernelStack: 12736 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9938704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
[... identical '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' / '# continue' xtrace records elided for every field ahead of HugePages_Total ...]
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
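The records above are the xtrace of setup/common.sh's get_meminfo walking every meminfo field until the requested key (HugePages_Total) matches, then echoing its value (1536). A minimal standalone sketch of the same key/value scan, in Bash; the function name is ours, not SPDK's:

    #!/usr/bin/env bash
    # Sketch only: read "Key: value ..." lines, skip non-matching keys,
    # print the value of the requested one -- the loop the xtrace shows.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Total   # printed 1536 on this test node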
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21609884 kB' 'MemUsed: 11267056 kB' 'SwapCached: 0 kB' 'Active: 5857992 kB' 'Inactive: 3429848 kB' 'Active(anon): 5586048 kB' 'Inactive(anon): 0 kB' 'Active(file): 271944 kB' 'Inactive(file): 3429848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9202404 kB' 'Mapped: 89068 kB' 'AnonPages: 88664 kB' 'Shmem: 5500612 kB' 'KernelStack: 6456 kB' 'PageTables: 2860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95108 kB' 'Slab: 301652 kB' 'SReclaimable: 95108 kB' 'SUnreclaim: 206544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.037 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace records repeat for every field of the node0 snapshot until the key matches ...]
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
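Before each scan, get_meminfo (as the @22-@29 records above show) swaps /proc/meminfo for the per-NUMA-node file when one exists and strips the "Node N " prefix those files carry. A sketch of just that selection and prefix-strip step, assuming node 0 exists; extglob is required for the +([0-9]) pattern:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    node=0             # example node; the trace above queries node 0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it off.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"   # first few normalized lines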
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 20130712 kB' 'MemUsed: 7534060 kB' 'SwapCached: 0 kB' 'Active: 3405100 kB' 'Inactive: 1093852 kB' 'Active(anon): 3280444 kB' 'Inactive(anon): 0 kB' 'Active(file): 124656 kB' 'Inactive(file): 1093852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4060988 kB' 'Mapped: 114112 kB' 'AnonPages: 438060 kB' 'Shmem: 2842480 kB' 'KernelStack: 6312 kB' 'PageTables: 4968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139824 kB' 'Slab: 311532 kB' 'SReclaimable: 139824 kB' 'SUnreclaim: 171708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.039 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace records repeat for every field of the node1 snapshot until the key matches ...]
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:09.300 node0=512 expecting 512
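The @126/@127 records above show the harness recording each measured and expected per-node count by using the count itself as an array index, which deduplicates the values into a set. A sketch of that idiom with this run's values; the comma join mirrors the 512,1024 comparison at @130 below, though the exact join hugepages.sh uses is not visible in this trace:

    #!/usr/bin/env bash
    # Values as measured in this run: node0=512, node1=1024 hugepages.
    nodes_test=([0]=512 [1]=1024)   # what the test computed
    nodes_sys=([0]=512 [1]=1024)    # what sysfs reported
    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # value used as index => a set
        sorted_s[nodes_sys[node]]=1
    done
    t=$(IFS=,; echo "${!sorted_t[*]}")   # e.g. "512,1024"
    s=$(IFS=,; echo "${!sorted_s[*]}")
    [[ $t == "$s" ]] && echo "per-node hugepage layout matches: $t"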
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:09.300 node1=1024 expecting 1024
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:09.300
00:05:09.300 real	0m1.429s
00:05:09.300 user	0m0.593s
00:05:09.300 sys	0m0.797s
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:09.300 22:35:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:09.300 ************************************
00:05:09.300 END TEST custom_alloc
00:05:09.300 ************************************
00:05:09.300 22:35:01 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:09.300 22:35:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:09.300 22:35:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:09.300 22:35:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:09.300 ************************************
00:05:09.300 START TEST no_shrink_alloc
00:05:09.300 ************************************
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
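get_test_nr_hugepages, traced above, turns a size in kB into a page count: a 2097152 kB request at the default 2048 kB hugepage size yields nr_hugepages=1024. A sketch of that arithmetic, reading Hugepagesize from /proc/meminfo; the 2048 fallback is our assumption, not SPDK's:

    #!/usr/bin/env bash
    size_kb=2097152   # the argument passed to get_test_nr_hugepages above
    hp_kb=$(awk '/^Hugepagesize:/ {print $2; exit}' /proc/meminfo)
    : "${hp_kb:=2048}"   # assumed fallback: 2 MiB hugepages
    echo "nr_hugepages=$(( size_kb / hp_kb ))"   # -> nr_hugepages=1024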
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.300 22:35:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:10.235 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:10.235 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:10.235 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:10.235 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:10.235 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:10.235 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:10.235 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:10.235 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:10.235 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:10.235 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:10.235 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:10.235 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:10.235 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:10.235 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:10.235 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:10.235 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:10.235 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
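The @96 record above tests the transparent-hugepage mode string "always [madvise] never" against the pattern *\[\n\e\v\e\r\]*: the kernel brackets the active mode, so the test passes (and AnonHugePages gets checked) unless THP is set to never. A sketch of the same guard:

    #!/usr/bin/env bash
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    # The active mode is the bracketed word, e.g. "always [madvise] never".
    if [[ -e $thp && $(< "$thp") != *"[never]"* ]]; then
        echo "THP active: $(< "$thp") -- AnonHugePages is meaningful"
    fi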
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42651220 kB' 'MemAvailable: 47195148 kB' 'Buffers: 11312 kB' 'Cached: 13252120 kB' 'SwapCached: 0 kB' 'Active: 9263720 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867120 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527124 kB' 'Mapped: 203200 kB' 'Shmem: 8343132 kB' 'KReclaimable: 234932 kB' 'Slab: 613048 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378116 kB' 'KernelStack: 12784 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9938900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
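The printf record above is the full /proc/meminfo snapshot that the scan below walks key by key. When only a single field is needed, one awk pass is an alternative sketch of the same lookup (not how setup/common.sh does it):

    #!/usr/bin/env bash
    # Alternative one-field lookup; the value is in kB, per /proc/meminfo.
    awk '/^AnonHugePages:/ {print $2; exit}' /proc/meminfo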
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.499 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue xtrace records repeat for each subsequent meminfo field ...]
00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.500 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42653148 kB' 'MemAvailable: 47197076 kB' 'Buffers: 11312 kB' 'Cached: 13252120 kB' 'SwapCached: 0 kB' 'Active: 9263852 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867252 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527272 kB' 'Mapped: 203192 kB' 'Shmem: 8343132 kB' 'KReclaimable: 234932 kB' 'Slab: 613048 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378116 kB' 'KernelStack: 12816 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9938916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 
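The loop traced above is the get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo (or a per-node file) into an array, strips any "Node <N> " prefix, then walks the keys with IFS=': ' until the requested field matches and echoes its value. A minimal re-creation, paraphrased from this trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the trace above; variable
    # names mirror the trace, but this is not the verbatim SPDK script.
    shopt -s extglob

    get_meminfo() {
        local get=$1
        local node=${2:-}   # empty -> fall through to the global /proc/meminfo
        local var val
        local mem_f=/proc/meminfo mem

        # Per-node statistics live under /sys when a node is requested.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it so the
        # same scan works for both sources.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"      # kB figure, or a bare page count for HugePages_*
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }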
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.501 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42653148 kB' 'MemAvailable: 47197076 kB' 'Buffers: 11312 kB' 'Cached: 13252120 kB' 'SwapCached: 0 kB' 'Active: 9263852 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867252 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527272 kB' 'Mapped: 203192 kB' 'Shmem: 8343132 kB' 'KReclaimable: 234932 kB' 'Slab: 613048 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378116 kB' 'KernelStack: 12816 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9938916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:10.502 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [condensed scan: keys MemTotal through HugePages_Rsvd, in snapshot order, compared against HugePages_Surp; all continue]
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
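The [[ -e /sys/devices/system/node/node/meminfo ]] test in each call above fails only because node is empty in this run, so the global /proc/meminfo is read. With a node argument the same helper would switch to the per-node file; a hypothetical usage of the sketch above (node 0 is an assumption, not taken from this trace):

    # Surplus hugepages on NUMA node 0 (node number is illustrative):
    get_meminfo HugePages_Surp 0
    # Equivalent manual read; per-node lines look like
    # "Node 0 HugePages_Surp:     0", so the value is field 4:
    awk '$3 == "HugePages_Surp:" {print $4}' /sys/devices/system/node/node0/meminfo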
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.503 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42653496 kB' 'MemAvailable: 47197424 kB' 'Buffers: 11312 kB' 'Cached: 13252140 kB' 'SwapCached: 0 kB' 'Active: 9263072 kB' 'Inactive: 4523700 kB' 'Active(anon): 8866472 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526460 kB' 'Mapped: 203192 kB' 'Shmem: 8343152 kB' 'KReclaimable: 234932 kB' 'Slab: 613120 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378188 kB' 'KernelStack: 12784 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9938940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:10.504 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [condensed scan: keys MemTotal through HugePages_Free, in snapshot order, compared against HugePages_Rsvd; all continue]
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
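At this point anon, surp and resv have all come back as 0, and hugepages.sh reports them and asserts that the configured pool is fully accounted for (the @102 through @109 lines that follow). A condensed paraphrase of that sequence, using the get_meminfo sketch above:

    # Paraphrase of setup/hugepages.sh@97-@109 as seen in this trace; the
    # 1024-page figure is the pool size this test run configured.
    nr_hugepages=1024
    anon=$(get_meminfo AnonHugePages)   # THP usage in kB; 0 in this run
    surp=$(get_meminfo HugePages_Surp)  # surplus pool pages; 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)  # reserved but unfaulted pages; 0 here
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The pool only counts as stable when surplus and reserved pages are
    # not inflating it:
    (( 1024 == nr_hugepages + surp + resv ))
    (( 1024 == nr_hugepages ))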
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:10.505 nr_hugepages=1024
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:10.505 resv_hugepages=0
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:10.505 surplus_hugepages=0
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:10.505 anon_hugepages=0
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.505 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42649724 kB' 'MemAvailable: 47193652 kB' 'Buffers: 11312 kB' 'Cached: 13252160 kB' 'SwapCached: 0 kB' 'Active: 9266064 kB' 'Inactive: 4523700 kB' 'Active(anon): 8869464 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529444 kB' 'Mapped: 203628 kB' 'Shmem: 8343172 kB' 'KReclaimable: 234932 kB' 'Slab: 613120 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378188 kB' 'KernelStack: 12784 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9942428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
[... common.sh@31-32 scan: every snapshot field from MemTotal through Unaccepted fails the HugePages_Total match and continues ...]
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
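get_nodes, traced just above, fills nodes_sys[] with one entry per NUMA node (here 1024 pages on node0 and 0 on node1). A plausible reconstruction follows; the xtrace only shows the already-expanded assignments, so reading each node's HugePages_Total through the get_meminfo sketch from earlier is this sketch's assumption about where the values come from:

    # Assumes get_meminfo from the earlier sketch has been sourced.
    shopt -s extglob    # required for the +([0-9]) glob in the for-word

    get_nodes() {
        local node id
        nodes_sys=()
        for node in /sys/devices/system/node/node+([0-9]); do
            id=${node##*node}    # "/sys/.../node0" -> "0"
            # Assumed source of the value; the trace only shows the result.
            nodes_sys[id]=$(get_meminfo HugePages_Total "$id")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))       # the caller expects at least one node
    }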
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.507 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20534700 kB' 'MemUsed: 12342240 kB' 'SwapCached: 0 kB' 'Active: 5863160 kB' 'Inactive: 3429848 kB' 'Active(anon): 5591216 kB' 'Inactive(anon): 0 kB' 'Active(file): 271944 kB' 'Inactive(file): 3429848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9202404 kB' 'Mapped: 89504 kB' 'AnonPages: 93704 kB' 'Shmem: 5500612 kB' 'KernelStack: 6440 kB' 'PageTables: 2808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95108 kB' 'Slab: 301664 kB' 'SReclaimable: 95108 kB' 'SUnreclaim: 206556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... common.sh@31-32 scan: every node0 field from MemTotal through HugePages_Free fails the HugePages_Surp match and continues ...]
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:10.508 node0=1024 expecting 1024
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:10.508 22:35:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:11.885 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:11.885 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:11.885 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:11.885 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:11.885 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:11.885 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:11.885 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:11.885 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:11.885 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:11.885 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:11.885 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:11.885 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:11.885 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:11.885 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:11.885 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:11.885 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:11.885 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:11.885 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
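The INFO line above is the behaviour this test exists to pin down: scripts/setup.sh was invoked with NRHUGE=512 and CLEAR_HUGE=no, found 1024 pages already allocated on node0, and left them untouched. A toy illustration of that never-shrink policy follows; the sysfs path is the standard kernel knob, but the guard logic is this sketch's assumption, not code lifted from setup.sh:

    ensure_hugepages() {    # usage: ensure_hugepages <node> <pages>
        local node=$1 want=$2
        local f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        local have
        have=$(< "$f")
        if (( have >= want )); then
            echo "INFO: Requested $want hugepages but $have already allocated on node$node"
            return 0    # keep the larger existing allocation: never shrink
        fi
        echo "$want" > "$f"    # growing is fine (needs root)
    }

    ensure_hugepages 0 512     # on the traced box: keeps the existing 1024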
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.885 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42644596 kB' 'MemAvailable: 47188524 kB' 'Buffers: 11312 kB' 'Cached: 13252228 kB' 'SwapCached: 0 kB' 'Active: 9264080 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867480 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527404 kB' 'Mapped: 203076 kB' 'Shmem: 8343240 kB' 'KReclaimable: 234932 kB' 'Slab: 612984 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378052 kB' 'KernelStack: 12800 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9939372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- 
00:05:11.886 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted each compared against AnonHugePages and skipped with continue]
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
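[editor's note] The wall of continue lines above is one pass of a key/value scan over /proc/meminfo. A minimal Bash reconstruction of that helper from the xtrace output follows; this is a sketch, not the verbatim setup/common.sh source, and the exact return handling is an assumption:

  #!/usr/bin/env bash
  # Sketch of the get_meminfo helper as seen in the trace above.
  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # Per-node stats live under sysfs; in the trace $node is empty,
      # so the check fails and the global file is used.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node N "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      # Scan key by key; every non-matching key is one of the
      # "continue" entries repeated throughout the trace.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total   # -> 1024 on the node traced here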
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.887 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42661436 kB' 'MemAvailable: 47205364 kB' 'Buffers: 11312 kB' 'Cached: 13252232 kB' 'SwapCached: 0 kB' 'Active: 9263936 kB' 'Inactive: 4523700 kB' 'Active(anon): 8867336 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527348 kB' 'Mapped: 203204 kB' 'Shmem: 8343244 kB' 'KReclaimable: 234932 kB' 'Slab: 612960 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378028 kB' 'KernelStack: 12800 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9939388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:11.888 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HugePages_Rsvd compared against HugePages_Surp and skipped with continue]
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.889 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42661752 kB' 'MemAvailable: 47205680 kB' 'Buffers: 11312 kB' 'Cached: 13252252 kB' 'SwapCached: 0 kB' 'Active: 9263520 kB' 'Inactive: 4523700 kB' 'Active(anon): 8866920 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526896 kB' 'Mapped: 203204 kB' 'Shmem: 8343264 kB' 'KReclaimable: 234932 kB' 'Slab: 613020 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378088 kB' 'KernelStack: 12800 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9939412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:11.890 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HugePages_Free compared against HugePages_Rsvd and skipped with continue]
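[editor's note] The snapshot/scan cycle above is the same lookup pattern repeated per key. In script form, what the trace has established so far, per hugepages.sh@97-100 (a paraphrase of the trace, not verbatim SPDK source; the command-substitution form is an assumption):

  # Values read back from the /proc/meminfo snapshots above:
  anon=$(get_meminfo AnonHugePages)    # -> 0   (hugepages.sh@97)
  surp=$(get_meminfo HugePages_Surp)   # -> 0   (hugepages.sh@99)
  resv=$(get_meminfo HugePages_Rsvd)   # -> 0   (hugepages.sh@100, scan in progress above)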
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:11.891 nr_hugepages=1024
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:11.891 resv_hugepages=0
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:11.891 surplus_hugepages=0
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:11.891 anon_hugepages=0
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.891 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42662540 kB' 'MemAvailable: 47206468 kB' 'Buffers: 11312 kB' 'Cached: 13252272 kB' 'SwapCached: 0 kB' 'Active: 9263576 kB' 'Inactive: 4523700 kB' 'Active(anon): 8866976 kB' 'Inactive(anon): 0 kB' 'Active(file): 396600 kB' 'Inactive(file): 4523700 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526936 kB' 'Mapped: 203204 kB' 'Shmem: 8343284 kB' 'KReclaimable: 234932 kB' 'Slab: 613020 kB' 'SReclaimable: 234932 kB' 'SUnreclaim: 378088 kB' 'KernelStack: 12816 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9939432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 14821376 kB' 'DirectMap1G: 52428800 kB'
00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: meminfo keys compared against HugePages_Total and skipped with continue; scan continues past this excerpt]
00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.892 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20515676 kB' 'MemUsed: 12361264 kB' 'SwapCached: 0 kB' 'Active: 5857428 kB' 'Inactive: 3429848 kB' 'Active(anon): 5585484 kB' 'Inactive(anon): 0 kB' 'Active(file): 271944 kB' 'Inactive(file): 3429848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9202404 
kB' 'Mapped: 89068 kB' 'AnonPages: 88032 kB' 'Shmem: 5500612 kB' 'KernelStack: 6456 kB' 'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95108 kB' 'Slab: 301636 kB' 'SReclaimable: 95108 kB' 'SUnreclaim: 206528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.893 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.894 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.153 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.154 node0=1024 expecting 1024 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.154 00:05:12.154 real 0m2.813s 00:05:12.154 user 0m1.185s 00:05:12.154 sys 0m1.527s 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.154 22:35:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:12.154 ************************************ 00:05:12.154 END TEST no_shrink_alloc 00:05:12.154 ************************************ 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:12.154 22:35:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:12.154 00:05:12.154 real 0m11.214s 00:05:12.154 user 0m4.386s 00:05:12.154 sys 0m5.716s 00:05:12.154 22:35:04 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.154 22:35:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.154 ************************************ 00:05:12.154 END TEST hugepages 00:05:12.154 ************************************ 00:05:12.154 22:35:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:12.154 22:35:04 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.154 22:35:04 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.154 22:35:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.154 ************************************ 00:05:12.154 START TEST driver 00:05:12.154 ************************************ 00:05:12.154 22:35:04 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:12.154 * Looking for test storage... 
00:05:12.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:12.154 22:35:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:05:12.154 22:35:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:12.154 22:35:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:14.685 22:35:07 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:14.685 22:35:07 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:14.685 22:35:07 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:14.685 22:35:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:14.685 ************************************
00:05:14.685 START TEST guess_driver
00:05:14.685 ************************************
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:05:14.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:14.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:14.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:14.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:14.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:05:14.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:05:14.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
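The pick_driver/vfio decision traced above reduces to two checks. A hedged sketch of the same logic (the uio_pci_generic fallback is an assumption on my part; it is not exercised in this excerpt):

    #!/usr/bin/env bash
    # Prefer vfio-pci only when the kernel exposes populated IOMMU groups and
    # modprobe can resolve the vfio_pci dependency chain to real .ko modules.
    shopt -s nullglob
    groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) &&
       modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        driver=vfio-pci
    else
        driver=uio_pci_generic   # assumed fallback, not shown in this run
    fi
    echo "Looking for driver=$driver"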
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
Looking for driver=vfio-pci
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:05:14.685 22:35:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:16.060 22:35:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:16.060 22:35:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:16.060 22:35:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[xtrace condensed: the @58/@61/@57 marker loop repeats for every "-> vfio-pci" line that setup.sh config reports, each iteration matching [[ vfio-pci == vfio-pci ]]]
00:05:16.998 22:35:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:16.998 22:35:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:16.998 22:35:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:16.998 22:35:09 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:16.998 22:35:09 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:05:16.998 22:35:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:16.998 22:35:09 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:19.531
00:05:19.531 real 0m4.756s
00:05:19.531 user 0m1.061s
00:05:19.531 sys 0m1.794s
00:05:19.531 22:35:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:19.531 22:35:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:05:19.531 ************************************
00:05:19.531 END TEST guess_driver
00:05:19.531 ************************************
00:05:19.531
00:05:19.531 real 0m7.369s
00:05:19.531 user 0m1.670s
00:05:19.531 sys 0m2.820s
00:05:19.531 22:35:11 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:19.531 22:35:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:19.531 ************************************
00:05:19.531 END TEST driver
00:05:19.531 ************************************
00:05:19.531 22:35:11 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:19.531 22:35:11 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:19.531 22:35:11 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:19.531 22:35:11 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:19.531 ************************************
00:05:19.531 START TEST devices
00:05:19.531 ************************************
00:05:19.531 22:35:11 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:19.531 * Looking for test storage...
00:05:19.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:19.531 22:35:11 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:05:19.531 22:35:11 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:05:19.531 22:35:11 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:19.531 22:35:11 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:20.916 22:35:13 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:05:20.916 22:35:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:05:20.916 22:35:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:05:20.916 22:35:13 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf
00:05:20.916 22:35:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:05:20.916 22:35:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:05:20.916 22:35:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:05:20.917 22:35:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:20.917 22:35:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:05:20.917 22:35:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:05:20.917 22:35:13 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:05:20.917 No valid GPT data, bailing
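For reference, the get_zoned_devs walk performed just above can be reproduced standalone. A minimal sketch using the same sysfs attribute as the trace (variable names are illustrative):

    #!/usr/bin/env bash
    # Record any nvme block device whose queue reports a zoned model other
    # than "none", so it can be excluded from the pool of ordinary test disks.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        model=$(<"$nvme/queue/zoned")
        [[ $model != none ]] && zoned_devs[${nvme##*/}]=$model
    done
    echo "zoned devices found: ${#zoned_devs[@]}"   # 0 in this run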
00:05:20.917 22:35:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:20.917 22:35:13 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:05:20.917 22:35:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:05:20.917 22:35:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:05:20.917 22:35:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:05:20.917 22:35:13 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:05:20.917 22:35:13 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:05:20.917 22:35:13 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:20.917 22:35:13 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:21.175 22:35:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:21.175 ************************************
00:05:21.175 START TEST nvme_mount
00:05:21.176 ************************************
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
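The 1000204886016 figure echoed above is derived, not read directly: /sys/block/<dev>/size counts 512-byte sectors. A quick sketch of the arithmetic and the minimum-size gate (the 1953525168 sector count is implied by the byte total):

    #!/usr/bin/env bash
    # sec_size_to_bytes, standalone: bytes = sectors * 512, then gate on the
    # 3 GiB minimum from devices.sh@198.
    dev=nvme0n1
    min_disk_size=3221225472                       # 3 * 1024^3
    size=$(( $(cat "/sys/block/$dev/size") * 512 ))
    echo "$dev: $size bytes"                       # 1953525168 * 512 = 1000204886016 here
    (( size >= min_disk_size )) && echo "$dev is large enough for the tests"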
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:05:21.176 22:35:13 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:05:22.113 Creating new GPT entries in memory.
00:05:22.113 GPT data structures destroyed! You may now partition the disk using fdisk or
00:05:22.113 other utilities.
00:05:22.113 22:35:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:05:22.114 22:35:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:22.114 22:35:14 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:22.114 22:35:14 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:22.114 22:35:14 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:05:23.051 Creating new GPT entries in memory.
00:05:23.051 The operation has completed successfully.
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3399265
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
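Stripped to its commands, the partition_drive/mkfs sequence just traced looks like the sketch below (same device, partition range, and mount point as this run; run as root on a scratch disk only):

    #!/usr/bin/env bash
    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                            # destroy old GPT/MBR labels
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 2097152 sectors = 1 GiB partition
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"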
00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.051 22:35:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:24.438 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.438 22:35:16 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:24.697 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:24.697 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:24.697 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:24.697 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:24.697 22:35:17 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.697 22:35:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.074 22:35:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.451 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:27.452 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:27.452 00:05:27.452 real 0m6.271s 00:05:27.452 user 0m1.550s 00:05:27.452 sys 0m2.298s 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.452 22:35:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:27.452 ************************************ 00:05:27.452 END TEST nvme_mount 00:05:27.452 ************************************ 
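For readability, here is a condensed sketch of the flow the nvme_mount test above exercises, end to end. Every command appears in the trace; the hard-coded disk and mount point are stand-ins, since the real devices.sh discovers the disk, uses the workspace path, and syncs udev events between steps.

```bash
#!/usr/bin/env bash
# Illustrative reconstruction of the nvme_mount flow traced above.
# Assumptions: disk and mount point are hard-coded here; the real test
# derives them dynamically and waits on udev between steps.
set -euo pipefail

disk=/dev/nvme0n1
mnt=/tmp/nvme_mount                  # stand-in for the workspace mount point

sgdisk "$disk" --zap-all             # destroy any old GPT/MBR state
sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition (512 B sectors)

mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"            # quiet, forced ext4, as in the trace
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"                 # dummy file the verify step checks for

# cleanup_nvme: drop the file, unmount, scrub filesystem and GPT signatures
rm "$mnt/test_nvme"
umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"
```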
00:05:27.452 22:35:19 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:27.452 22:35:19 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.452 22:35:19 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.452 22:35:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:27.452 ************************************ 00:05:27.452 START TEST dm_mount 00:05:27.452 ************************************ 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:27.452 22:35:19 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:28.389 Creating new GPT entries in memory. 00:05:28.389 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:28.389 other utilities. 00:05:28.389 22:35:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:28.389 22:35:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.389 22:35:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:28.389 22:35:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.389 22:35:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:29.326 Creating new GPT entries in memory. 00:05:29.326 The operation has completed successfully. 
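The partition arithmetic being traced here is worth isolating: sizes are converted to 512-byte sectors, the first partition is aligned at sector 2048, and each subsequent partition starts one sector after the previous one ends. A minimal standalone sketch of that loop follows; the loop bound of 2 matches this dm_mount run, and the bounds it computes (2048:2099199, then 2099200:4196351) are exactly the ones visible in the trace.

```bash
# Minimal sketch of the partition loop in setup/common.sh as traced above.
disk=/dev/nvme0n1
size=1073741824                # 1 GiB per partition, in bytes
(( size /= 512 ))              # convert to 512 B sectors, as the script does
part_start=0 part_end=0

for part in 1 2; do            # part_no=2 for the dm_mount test
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end   = part_start + size - 1 ))
  # flock keeps concurrent writers off the disk while sgdisk edits the GPT
  flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done
```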
00:05:29.326 22:35:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:29.326 22:35:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.326 22:35:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:29.326 22:35:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:29.326 22:35:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:30.703 The operation has completed successfully. 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3401651 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.703 22:35:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.637 22:35:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:31.895 22:35:24 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.895 22:35:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.830 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:33.089 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:33.089 00:05:33.089 real 0m5.662s 00:05:33.089 user 0m0.915s 00:05:33.089 sys 0m1.604s 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.089 22:35:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 ************************************ 00:05:33.089 END TEST dm_mount 00:05:33.089 ************************************ 00:05:33.089 22:35:25 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:33.089 22:35:25 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:33.089 22:35:25 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.089 22:35:25 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.089 22:35:25 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.089 22:35:25 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.089 22:35:25 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.347 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:33.347 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:33.347 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:33.347 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:33.347 22:35:25 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:33.347 22:35:25 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.347 22:35:25 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.347 22:35:25 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.347 22:35:25 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.347 22:35:25 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.347 22:35:25 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:33.347 00:05:33.347 real 0m13.829s 00:05:33.347 user 0m3.089s 00:05:33.347 sys 0m4.931s 00:05:33.347 22:35:25 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.347 22:35:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:33.347 ************************************ 00:05:33.347 END TEST devices 00:05:33.347 ************************************ 00:05:33.347 00:05:33.347 real 0m42.918s 00:05:33.347 user 0m12.377s 00:05:33.347 sys 0m18.760s 00:05:33.347 22:35:25 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.347 22:35:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:33.347 ************************************ 00:05:33.347 END TEST setup.sh 00:05:33.347 ************************************ 00:05:33.347 22:35:25 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:34.722 Hugepages 00:05:34.722 node hugesize free / total 00:05:34.722 node0 1048576kB 0 / 0 00:05:34.722 node0 2048kB 2048 / 2048 00:05:34.722 node1 1048576kB 0 / 0 00:05:34.722 node1 2048kB 0 / 0 00:05:34.722 00:05:34.722 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:34.722 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:34.722 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:34.722 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:34.722 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:34.722 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:34.722 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:34.722 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:34.722 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:34.722 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:34.722 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:34.722 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:34.722 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:34.722 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:34.722 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:34.722 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:34.722 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:34.722 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:34.722 22:35:27 -- spdk/autotest.sh@130 -- # uname -s 00:05:34.722 22:35:27 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:34.722 22:35:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:34.722 22:35:27 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:36.096 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:36.096 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:36.096 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:36.096 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:36.096 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:36.096 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:36.096 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:36.096 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:36.096 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:36.096 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:36.096 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:36.096 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:36.096 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:36.096 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:36.096 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:36.096 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:37.032 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:37.032 22:35:29 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:37.969 22:35:30 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:37.969 22:35:30 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:37.969 22:35:30 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:37.969 22:35:30 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:37.969 22:35:30 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:37.969 22:35:30 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:37.969 22:35:30 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:37.969 22:35:30 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:37.969 22:35:30 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:37.969 22:35:30 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:37.969 22:35:30 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:37.969 22:35:30 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:39.399 Waiting for block devices as requested 00:05:39.399 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:39.399 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:39.399 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:39.399 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:39.657 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:39.657 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:39.657 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:39.657 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:39.657 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:39.916 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:39.916 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:39.916 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:40.175 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:40.175 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:40.175 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:40.175 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:40.434 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:40.434 22:35:32 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
00:05:40.434 22:35:32 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:40.434 22:35:32 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:40.434 22:35:32 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:40.434 22:35:32 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:40.434 22:35:32 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:40.434 22:35:32 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:40.434 22:35:32 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:40.434 22:35:32 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:40.434 22:35:32 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:40.434 22:35:32 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:40.434 22:35:32 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:40.434 22:35:32 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:40.434 22:35:32 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:40.434 22:35:32 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:40.434 22:35:32 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:40.434 22:35:32 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:40.434 22:35:32 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:40.434 22:35:32 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:40.434 22:35:32 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:40.434 22:35:32 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:40.434 22:35:32 -- common/autotest_common.sh@1553 -- # continue 00:05:40.434 22:35:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:40.434 22:35:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.434 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.434 22:35:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:40.434 22:35:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:40.434 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.434 22:35:32 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:41.808 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:41.808 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:41.808 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:41.808 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:41.808 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:41.808 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:41.808 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:41.808 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:41.808 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:41.808 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:41.808 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:41.808 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:41.808 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:41.808 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:41.808 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:41.808 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:42.747 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:42.747 22:35:35 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:42.747 22:35:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.747 22:35:35 -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.747 22:35:35 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:42.747 22:35:35 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:42.747 22:35:35 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:42.747 22:35:35 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:42.747 22:35:35 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:42.747 22:35:35 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:42.747 22:35:35 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:42.747 22:35:35 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:42.747 22:35:35 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:42.747 22:35:35 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:42.747 22:35:35 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:42.747 22:35:35 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:42.747 22:35:35 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:42.747 22:35:35 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:42.747 22:35:35 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:42.747 22:35:35 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:42.747 22:35:35 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:42.747 22:35:35 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:42.747 22:35:35 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:42.747 22:35:35 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:42.747 22:35:35 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3406828 00:05:42.747 22:35:35 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.747 22:35:35 -- common/autotest_common.sh@1594 -- # waitforlisten 3406828 00:05:42.747 22:35:35 -- common/autotest_common.sh@827 -- # '[' -z 3406828 ']' 00:05:42.747 22:35:35 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.747 22:35:35 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.747 22:35:35 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.747 22:35:35 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.747 22:35:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.006 [2024-07-26 22:35:35.269516] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
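Before reverting OPAL state, the trace above filters NVMe controllers by PCI device id (0x0a54 for this drive). A condensed sketch of that selection, assuming gen_nvme.sh emits the usual SPDK JSON config with one traddr per controller, as the trace shows:

```bash
# Sketch of get_nvme_bdfs_by_id as exercised above: enumerate NVMe BDFs
# from the generated SPDK config, then keep those whose PCI device id
# matches the wanted value.
want=0x0a54
mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

bdfs=()
for bdf in "${all_bdfs[@]}"; do
  device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
  [[ $device == "$want" ]] && bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"                           # -> 0000:88:00.0 here
```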
00:05:43.006 [2024-07-26 22:35:35.269615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406828 ] 00:05:43.006 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.006 [2024-07-26 22:35:35.331831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.006 [2024-07-26 22:35:35.421331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.264 22:35:35 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.264 22:35:35 -- common/autotest_common.sh@860 -- # return 0 00:05:43.264 22:35:35 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:43.264 22:35:35 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:43.264 22:35:35 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:46.548 nvme0n1 00:05:46.548 22:35:38 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:46.548 [2024-07-26 22:35:38.981585] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:46.548 [2024-07-26 22:35:38.981636] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:46.548 request: 00:05:46.548 { 00:05:46.548 "nvme_ctrlr_name": "nvme0", 00:05:46.548 "password": "test", 00:05:46.548 "method": "bdev_nvme_opal_revert", 00:05:46.548 "req_id": 1 00:05:46.548 } 00:05:46.548 Got JSON-RPC error response 00:05:46.548 response: 00:05:46.548 { 00:05:46.548 "code": -32603, 00:05:46.548 "message": "Internal error" 00:05:46.548 } 00:05:46.548 22:35:38 -- common/autotest_common.sh@1600 -- # true 00:05:46.548 22:35:38 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:46.548 22:35:38 -- common/autotest_common.sh@1604 -- # killprocess 3406828 00:05:46.548 22:35:38 -- common/autotest_common.sh@946 -- # '[' -z 3406828 ']' 00:05:46.548 22:35:38 -- common/autotest_common.sh@950 -- # kill -0 3406828 00:05:46.548 22:35:39 -- common/autotest_common.sh@951 -- # uname 00:05:46.548 22:35:39 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:46.548 22:35:39 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3406828 00:05:46.548 22:35:39 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:46.548 22:35:39 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:46.548 22:35:39 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3406828' 00:05:46.548 killing process with pid 3406828 00:05:46.548 22:35:39 -- common/autotest_common.sh@965 -- # kill 3406828 00:05:46.548 22:35:39 -- common/autotest_common.sh@970 -- # wait 3406828 00:05:46.807 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:46.807 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:46.807 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:46.807 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:46.807 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:46.807 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:46.807 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:46.807 EAL: Unexpected size 0 of DMA remapping cleared 
00:05:48.710 22:35:40 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:48.710 22:35:40 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:48.710 22:35:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:48.710 22:35:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:48.710 22:35:40 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:48.710 22:35:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:48.710 22:35:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.710 22:35:40 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:48.710 22:35:40 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:48.710 22:35:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.710 22:35:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.710 22:35:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.710 ************************************ 00:05:48.710 START TEST env 00:05:48.710 ************************************ 00:05:48.710 22:35:40 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:48.710 * Looking for test storage...
00:05:48.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:48.710 22:35:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:48.710 22:35:40 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.710 22:35:40 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.710 22:35:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.710 ************************************ 00:05:48.710 START TEST env_memory 00:05:48.710 ************************************ 00:05:48.710 22:35:40 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:48.710 00:05:48.710 00:05:48.710 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.710 http://cunit.sourceforge.net/ 00:05:48.710 00:05:48.710 00:05:48.710 Suite: memory 00:05:48.710 Test: alloc and free memory map ...[2024-07-26 22:35:40.947168] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:48.710 passed 00:05:48.710 Test: mem map translation ...[2024-07-26 22:35:40.967113] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:48.710 [2024-07-26 22:35:40.967134] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:48.710 [2024-07-26 22:35:40.967184] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:48.710 [2024-07-26 22:35:40.967196] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:48.710 passed 00:05:48.710 Test: mem map registration ...[2024-07-26 22:35:41.007614] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:48.710 [2024-07-26 22:35:41.007633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:48.710 passed 00:05:48.710 Test: mem map adjacent registrations ...passed 00:05:48.710 00:05:48.710 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.710 suites 1 1 n/a 0 0 00:05:48.710 tests 4 4 4 0 0 00:05:48.710 asserts 152 152 152 0 n/a 00:05:48.710 00:05:48.710 Elapsed time = 0.140 seconds 00:05:48.710 00:05:48.710 real 0m0.147s 00:05:48.710 user 0m0.139s 00:05:48.710 sys 0m0.008s 00:05:48.710 22:35:41 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.710 22:35:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:48.710 ************************************ 00:05:48.710 END TEST env_memory 00:05:48.710 ************************************ 00:05:48.710 22:35:41 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:48.710 22:35:41 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.710 22:35:41 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
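Each of these suites is launched through the run_test helper from autotest_common.sh, which is what prints the START TEST / END TEST banners around env_memory above. Judging from the banners and timing in this log, the helper behaves roughly like the simplified sketch below (the real helper also manages xtrace state and timing bookkeeping):

    run_test() {
        # Simplified approximation of run_test, inferred from the banners
        # in this log; not the verbatim autotest_common.sh implementation.
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # run the test binary or script with its arguments
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # e.g. run_test env_vtophys "$rootdir/test/env/vtophys/vtophys"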
00:05:48.710 22:35:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.710 ************************************ 00:05:48.710 START TEST env_vtophys 00:05:48.710 ************************************ 00:05:48.710 22:35:41 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:48.710 EAL: lib.eal log level changed from notice to debug 00:05:48.710 EAL: Detected lcore 0 as core 0 on socket 0 00:05:48.710 EAL: Detected lcore 1 as core 1 on socket 0 00:05:48.710 EAL: Detected lcore 2 as core 2 on socket 0 00:05:48.710 EAL: Detected lcore 3 as core 3 on socket 0 00:05:48.710 EAL: Detected lcore 4 as core 4 on socket 0 00:05:48.710 EAL: Detected lcore 5 as core 5 on socket 0 00:05:48.710 EAL: Detected lcore 6 as core 8 on socket 0 00:05:48.710 EAL: Detected lcore 7 as core 9 on socket 0 00:05:48.710 EAL: Detected lcore 8 as core 10 on socket 0 00:05:48.710 EAL: Detected lcore 9 as core 11 on socket 0 00:05:48.710 EAL: Detected lcore 10 as core 12 on socket 0 00:05:48.710 EAL: Detected lcore 11 as core 13 on socket 0 00:05:48.710 EAL: Detected lcore 12 as core 0 on socket 1 00:05:48.710 EAL: Detected lcore 13 as core 1 on socket 1 00:05:48.710 EAL: Detected lcore 14 as core 2 on socket 1 00:05:48.710 EAL: Detected lcore 15 as core 3 on socket 1 00:05:48.710 EAL: Detected lcore 16 as core 4 on socket 1 00:05:48.710 EAL: Detected lcore 17 as core 5 on socket 1 00:05:48.710 EAL: Detected lcore 18 as core 8 on socket 1 00:05:48.710 EAL: Detected lcore 19 as core 9 on socket 1 00:05:48.710 EAL: Detected lcore 20 as core 10 on socket 1 00:05:48.710 EAL: Detected lcore 21 as core 11 on socket 1 00:05:48.710 EAL: Detected lcore 22 as core 12 on socket 1 00:05:48.710 EAL: Detected lcore 23 as core 13 on socket 1 00:05:48.710 EAL: Detected lcore 24 as core 0 on socket 0 00:05:48.710 EAL: Detected lcore 25 as core 1 on socket 0 00:05:48.710 EAL: Detected lcore 26 as core 2 on socket 0 00:05:48.710 EAL: Detected lcore 27 as core 3 on socket 0 00:05:48.710 EAL: Detected lcore 28 as core 4 on socket 0 00:05:48.710 EAL: Detected lcore 29 as core 5 on socket 0 00:05:48.710 EAL: Detected lcore 30 as core 8 on socket 0 00:05:48.710 EAL: Detected lcore 31 as core 9 on socket 0 00:05:48.710 EAL: Detected lcore 32 as core 10 on socket 0 00:05:48.710 EAL: Detected lcore 33 as core 11 on socket 0 00:05:48.710 EAL: Detected lcore 34 as core 12 on socket 0 00:05:48.710 EAL: Detected lcore 35 as core 13 on socket 0 00:05:48.710 EAL: Detected lcore 36 as core 0 on socket 1 00:05:48.710 EAL: Detected lcore 37 as core 1 on socket 1 00:05:48.710 EAL: Detected lcore 38 as core 2 on socket 1 00:05:48.710 EAL: Detected lcore 39 as core 3 on socket 1 00:05:48.710 EAL: Detected lcore 40 as core 4 on socket 1 00:05:48.710 EAL: Detected lcore 41 as core 5 on socket 1 00:05:48.710 EAL: Detected lcore 42 as core 8 on socket 1 00:05:48.710 EAL: Detected lcore 43 as core 9 on socket 1 00:05:48.711 EAL: Detected lcore 44 as core 10 on socket 1 00:05:48.711 EAL: Detected lcore 45 as core 11 on socket 1 00:05:48.711 EAL: Detected lcore 46 as core 12 on socket 1 00:05:48.711 EAL: Detected lcore 47 as core 13 on socket 1 00:05:48.711 EAL: Maximum logical cores by configuration: 128 00:05:48.711 EAL: Detected CPU lcores: 48 00:05:48.711 EAL: Detected NUMA nodes: 2 00:05:48.711 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:48.711 EAL: Detected shared linkage of DPDK 00:05:48.711 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:48.711 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:48.711 EAL: Registered [vdev] bus. 00:05:48.711 EAL: bus.vdev log level changed from disabled to notice 00:05:48.711 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:48.711 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:48.711 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:48.711 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:48.711 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:48.711 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:48.711 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:48.711 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:48.711 EAL: No shared files mode enabled, IPC will be disabled 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: Bus pci wants IOVA as 'DC' 00:05:48.711 EAL: Bus vdev wants IOVA as 'DC' 00:05:48.711 EAL: Buses did not request a specific IOVA mode. 00:05:48.711 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:48.711 EAL: Selected IOVA mode 'VA' 00:05:48.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.711 EAL: Probing VFIO support... 00:05:48.711 EAL: IOMMU type 1 (Type 1) is supported 00:05:48.711 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:48.711 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:48.711 EAL: VFIO support initialized 00:05:48.711 EAL: Ask a virtual area of 0x2e000 bytes 00:05:48.711 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:48.711 EAL: Setting up physically contiguous memory... 
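The probe above ends with IOVA mode 'VA' because an IOMMU is present and VFIO type 1 initializes; without those, EAL would have to fall back to physical addressing. Outside the harness, the same preconditions can be checked with a few generic Linux probes (shown for illustration, not part of the test scripts):

    # Illustrative host checks for the conditions EAL reports above.
    ls /sys/kernel/iommu_groups      # non-empty => IOMMU enabled, IOVA=VA possible
    lsmod | grep -E 'vfio(_pci)?'    # vfio modules loaded => "VFIO support initialized"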
00:05:48.711 EAL: Setting maximum number of open files to 524288 00:05:48.711 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:48.711 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:48.711 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:48.711 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.711 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:48.711 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.711 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.711 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:48.711 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:48.711 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.711 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:48.711 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.711 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.711 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:48.711 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:48.711 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.711 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:48.711 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.711 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.711 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:48.711 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:48.711 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.711 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:48.711 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.711 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.711 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:48.711 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:48.711 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:48.711 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.711 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:48.711 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:48.711 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.711 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:48.711 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:48.711 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.711 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:48.711 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:48.711 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.711 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:48.711 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:48.711 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.711 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:48.711 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:48.711 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.711 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:48.711 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:48.711 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.711 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:48.711 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:48.711 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.711 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:48.711 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:48.711 EAL: Hugepages will be freed exactly as allocated. 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: TSC frequency is ~2700000 KHz 00:05:48.711 EAL: Main lcore 0 is ready (tid=7ff6abdd9a00;cpuset=[0]) 00:05:48.711 EAL: Trying to obtain current memory policy. 00:05:48.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.711 EAL: Restoring previous memory policy: 0 00:05:48.711 EAL: request: mp_malloc_sync 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: Heap on socket 0 was expanded by 2MB 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:48.711 EAL: Mem event callback 'spdk:(nil)' registered 00:05:48.711 00:05:48.711 00:05:48.711 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.711 http://cunit.sourceforge.net/ 00:05:48.711 00:05:48.711 00:05:48.711 Suite: components_suite 00:05:48.711 Test: vtophys_malloc_test ...passed 00:05:48.711 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:48.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.711 EAL: Restoring previous memory policy: 4 00:05:48.711 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.711 EAL: request: mp_malloc_sync 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: Heap on socket 0 was expanded by 4MB 00:05:48.711 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.711 EAL: request: mp_malloc_sync 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: Heap on socket 0 was shrunk by 4MB 00:05:48.711 EAL: Trying to obtain current memory policy. 00:05:48.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.711 EAL: Restoring previous memory policy: 4 00:05:48.711 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.711 EAL: request: mp_malloc_sync 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: Heap on socket 0 was expanded by 6MB 00:05:48.711 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.711 EAL: request: mp_malloc_sync 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: Heap on socket 0 was shrunk by 6MB 00:05:48.711 EAL: Trying to obtain current memory policy. 00:05:48.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.711 EAL: Restoring previous memory policy: 4 00:05:48.711 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.711 EAL: request: mp_malloc_sync 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: Heap on socket 0 was expanded by 10MB 00:05:48.711 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.711 EAL: request: mp_malloc_sync 00:05:48.711 EAL: No shared files mode enabled, IPC is disabled 00:05:48.711 EAL: Heap on socket 0 was shrunk by 10MB 00:05:48.711 EAL: Trying to obtain current memory policy. 
00:05:48.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.711 EAL: Restoring previous memory policy: 4 00:05:48.711 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.712 EAL: request: mp_malloc_sync 00:05:48.712 EAL: No shared files mode enabled, IPC is disabled 00:05:48.712 EAL: Heap on socket 0 was expanded by 18MB 00:05:48.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.712 EAL: request: mp_malloc_sync 00:05:48.712 EAL: No shared files mode enabled, IPC is disabled 00:05:48.712 EAL: Heap on socket 0 was shrunk by 18MB 00:05:48.712 EAL: Trying to obtain current memory policy. 00:05:48.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.712 EAL: Restoring previous memory policy: 4 00:05:48.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.712 EAL: request: mp_malloc_sync 00:05:48.712 EAL: No shared files mode enabled, IPC is disabled 00:05:48.712 EAL: Heap on socket 0 was expanded by 34MB 00:05:48.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.712 EAL: request: mp_malloc_sync 00:05:48.712 EAL: No shared files mode enabled, IPC is disabled 00:05:48.712 EAL: Heap on socket 0 was shrunk by 34MB 00:05:48.712 EAL: Trying to obtain current memory policy. 00:05:48.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.970 EAL: Restoring previous memory policy: 4 00:05:48.970 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.970 EAL: request: mp_malloc_sync 00:05:48.970 EAL: No shared files mode enabled, IPC is disabled 00:05:48.970 EAL: Heap on socket 0 was expanded by 66MB 00:05:48.970 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.970 EAL: request: mp_malloc_sync 00:05:48.970 EAL: No shared files mode enabled, IPC is disabled 00:05:48.970 EAL: Heap on socket 0 was shrunk by 66MB 00:05:48.970 EAL: Trying to obtain current memory policy. 00:05:48.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.970 EAL: Restoring previous memory policy: 4 00:05:48.970 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.970 EAL: request: mp_malloc_sync 00:05:48.970 EAL: No shared files mode enabled, IPC is disabled 00:05:48.970 EAL: Heap on socket 0 was expanded by 130MB 00:05:48.970 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.970 EAL: request: mp_malloc_sync 00:05:48.970 EAL: No shared files mode enabled, IPC is disabled 00:05:48.970 EAL: Heap on socket 0 was shrunk by 130MB 00:05:48.970 EAL: Trying to obtain current memory policy. 00:05:48.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.970 EAL: Restoring previous memory policy: 4 00:05:48.970 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.970 EAL: request: mp_malloc_sync 00:05:48.970 EAL: No shared files mode enabled, IPC is disabled 00:05:48.970 EAL: Heap on socket 0 was expanded by 258MB 00:05:48.970 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.228 EAL: request: mp_malloc_sync 00:05:49.228 EAL: No shared files mode enabled, IPC is disabled 00:05:49.228 EAL: Heap on socket 0 was shrunk by 258MB 00:05:49.228 EAL: Trying to obtain current memory policy. 
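Every "Heap on socket 0 was expanded/shrunk by N MB" pair in this suite corresponds to 2048 kB hugepages being mapped in and released again through the registered 'spdk:' mem event callback. While the test runs, that churn is visible in the standard kernel counters (generic commands, for illustration only):

    grep HugePages_ /proc/meminfo
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages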
00:05:49.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.228 EAL: Restoring previous memory policy: 4 00:05:49.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.228 EAL: request: mp_malloc_sync 00:05:49.228 EAL: No shared files mode enabled, IPC is disabled 00:05:49.228 EAL: Heap on socket 0 was expanded by 514MB 00:05:49.486 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.486 EAL: request: mp_malloc_sync 00:05:49.486 EAL: No shared files mode enabled, IPC is disabled 00:05:49.486 EAL: Heap on socket 0 was shrunk by 514MB 00:05:49.486 EAL: Trying to obtain current memory policy. 00:05:49.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.745 EAL: Restoring previous memory policy: 4 00:05:49.745 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.745 EAL: request: mp_malloc_sync 00:05:49.745 EAL: No shared files mode enabled, IPC is disabled 00:05:49.745 EAL: Heap on socket 0 was expanded by 1026MB 00:05:50.003 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.261 EAL: request: mp_malloc_sync 00:05:50.261 EAL: No shared files mode enabled, IPC is disabled 00:05:50.261 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:50.261 passed 00:05:50.261 00:05:50.261 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.261 suites 1 1 n/a 0 0 00:05:50.261 tests 2 2 2 0 0 00:05:50.261 asserts 497 497 497 0 n/a 00:05:50.261 00:05:50.261 Elapsed time = 1.400 seconds 00:05:50.261 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.261 EAL: request: mp_malloc_sync 00:05:50.261 EAL: No shared files mode enabled, IPC is disabled 00:05:50.261 EAL: Heap on socket 0 was shrunk by 2MB 00:05:50.261 EAL: No shared files mode enabled, IPC is disabled 00:05:50.261 EAL: No shared files mode enabled, IPC is disabled 00:05:50.261 EAL: No shared files mode enabled, IPC is disabled 00:05:50.261 00:05:50.261 real 0m1.511s 00:05:50.261 user 0m0.886s 00:05:50.261 sys 0m0.592s 00:05:50.261 22:35:42 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.261 22:35:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:50.261 ************************************ 00:05:50.261 END TEST env_vtophys 00:05:50.261 ************************************ 00:05:50.261 22:35:42 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:50.261 22:35:42 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.261 22:35:42 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.261 22:35:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.261 ************************************ 00:05:50.261 START TEST env_pci 00:05:50.261 ************************************ 00:05:50.261 22:35:42 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:50.261 00:05:50.261 00:05:50.262 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.262 http://cunit.sourceforge.net/ 00:05:50.262 00:05:50.262 00:05:50.262 Suite: pci 00:05:50.262 Test: pci_hook ...[2024-07-26 22:35:42.675213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3407713 has claimed it 00:05:50.262 EAL: Cannot find device (10000:00:01.0) 00:05:50.262 EAL: Failed to attach device on primary process 00:05:50.262 passed 00:05:50.262 00:05:50.262 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:50.262 suites 1 1 n/a 0 0 00:05:50.262 tests 1 1 1 0 0 00:05:50.262 asserts 25 25 25 0 n/a 00:05:50.262 00:05:50.262 Elapsed time = 0.021 seconds 00:05:50.262 00:05:50.262 real 0m0.033s 00:05:50.262 user 0m0.010s 00:05:50.262 sys 0m0.024s 00:05:50.262 22:35:42 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.262 22:35:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:50.262 ************************************ 00:05:50.262 END TEST env_pci 00:05:50.262 ************************************ 00:05:50.262 22:35:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:50.262 22:35:42 env -- env/env.sh@15 -- # uname 00:05:50.262 22:35:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:50.262 22:35:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:50.262 22:35:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:50.262 22:35:42 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:50.262 22:35:42 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.262 22:35:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.262 ************************************ 00:05:50.262 START TEST env_dpdk_post_init 00:05:50.262 ************************************ 00:05:50.262 22:35:42 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:50.262 EAL: Detected CPU lcores: 48 00:05:50.262 EAL: Detected NUMA nodes: 2 00:05:50.262 EAL: Detected shared linkage of DPDK 00:05:50.520 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:50.520 EAL: Selected IOVA mode 'VA' 00:05:50.520 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.520 EAL: VFIO support initialized 00:05:50.520 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:50.520 EAL: Using IOMMU type 1 (Type 1) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:50.520 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:50.780 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:51.345 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:54.624 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:54.625 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:54.625 Starting DPDK initialization... 00:05:54.625 Starting SPDK post initialization... 00:05:54.625 SPDK NVMe probe 00:05:54.625 Attaching to 0000:88:00.0 00:05:54.625 Attached to 0000:88:00.0 00:05:54.625 Cleaning up... 00:05:54.625 00:05:54.625 real 0m4.374s 00:05:54.625 user 0m3.242s 00:05:54.625 sys 0m0.193s 00:05:54.625 22:35:47 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.625 22:35:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.625 ************************************ 00:05:54.625 END TEST env_dpdk_post_init 00:05:54.625 ************************************ 00:05:54.883 22:35:47 env -- env/env.sh@26 -- # uname 00:05:54.883 22:35:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:54.883 22:35:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:54.883 22:35:47 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.883 22:35:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.883 22:35:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.883 ************************************ 00:05:54.883 START TEST env_mem_callbacks 00:05:54.883 ************************************ 00:05:54.883 22:35:47 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:54.883 EAL: Detected CPU lcores: 48 00:05:54.883 EAL: Detected NUMA nodes: 2 00:05:54.883 EAL: Detected shared linkage of DPDK 00:05:54.883 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:54.883 EAL: Selected IOVA mode 'VA' 00:05:54.883 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.883 EAL: VFIO support initialized 00:05:54.883 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:54.883 00:05:54.883 00:05:54.883 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.883 http://cunit.sourceforge.net/ 00:05:54.883 00:05:54.883 00:05:54.883 Suite: memory 00:05:54.883 Test: test ... 
00:05:54.883 register 0x200000200000 2097152 00:05:54.883 malloc 3145728 00:05:54.883 register 0x200000400000 4194304 00:05:54.883 buf 0x200000500000 len 3145728 PASSED 00:05:54.883 malloc 64 00:05:54.883 buf 0x2000004fff40 len 64 PASSED 00:05:54.883 malloc 4194304 00:05:54.883 register 0x200000800000 6291456 00:05:54.883 buf 0x200000a00000 len 4194304 PASSED 00:05:54.883 free 0x200000500000 3145728 00:05:54.883 free 0x2000004fff40 64 00:05:54.883 unregister 0x200000400000 4194304 PASSED 00:05:54.883 free 0x200000a00000 4194304 00:05:54.883 unregister 0x200000800000 6291456 PASSED 00:05:54.883 malloc 8388608 00:05:54.883 register 0x200000400000 10485760 00:05:54.883 buf 0x200000600000 len 8388608 PASSED 00:05:54.883 free 0x200000600000 8388608 00:05:54.883 unregister 0x200000400000 10485760 PASSED 00:05:54.883 passed 00:05:54.883 00:05:54.883 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.883 suites 1 1 n/a 0 0 00:05:54.883 tests 1 1 1 0 0 00:05:54.884 asserts 15 15 15 0 n/a 00:05:54.884 00:05:54.884 Elapsed time = 0.005 seconds 00:05:54.884 00:05:54.884 real 0m0.048s 00:05:54.884 user 0m0.016s 00:05:54.884 sys 0m0.031s 00:05:54.884 22:35:47 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.884 22:35:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:54.884 ************************************ 00:05:54.884 END TEST env_mem_callbacks 00:05:54.884 ************************************ 00:05:54.884 00:05:54.884 real 0m6.395s 00:05:54.884 user 0m4.410s 00:05:54.884 sys 0m1.030s 00:05:54.884 22:35:47 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.884 22:35:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.884 ************************************ 00:05:54.884 END TEST env 00:05:54.884 ************************************ 00:05:54.884 22:35:47 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:54.884 22:35:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.884 22:35:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.884 22:35:47 -- common/autotest_common.sh@10 -- # set +x 00:05:54.884 ************************************ 00:05:54.884 START TEST rpc 00:05:54.884 ************************************ 00:05:54.884 22:35:47 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:54.884 * Looking for test storage... 00:05:54.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:54.884 22:35:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3408373 00:05:54.884 22:35:47 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:54.884 22:35:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.884 22:35:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3408373 00:05:54.884 22:35:47 rpc -- common/autotest_common.sh@827 -- # '[' -z 3408373 ']' 00:05:54.884 22:35:47 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.884 22:35:47 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.884 22:35:47 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
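The "Waiting for process to start up..." message comes from the waitforlisten helper, which in essence polls the RPC socket until the freshly launched spdk_tgt answers. A minimal standalone equivalent looks like the sketch below (simplified; the real helper also enforces max_retries, and rpc_get_methods is a standard SPDK RPC):

    # Rough equivalent of waitforlisten: poll the RPC socket until the
    # target answers; $spdk_tgt_pid and $rootdir are from the trace above.
    rpc_addr=/var/tmp/spdk.sock
    until "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" || exit 1   # bail out if the target died
        sleep 0.5
    done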
00:05:54.884 22:35:47 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.884 22:35:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.884 [2024-07-26 22:35:47.386317] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:54.884 [2024-07-26 22:35:47.386421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408373 ] 00:05:55.143 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.143 [2024-07-26 22:35:47.443409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.143 [2024-07-26 22:35:47.530980] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:55.143 [2024-07-26 22:35:47.531050] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3408373' to capture a snapshot of events at runtime. 00:05:55.143 [2024-07-26 22:35:47.531098] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:55.143 [2024-07-26 22:35:47.531116] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:55.143 [2024-07-26 22:35:47.531127] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3408373 for offline analysis/debug. 00:05:55.143 [2024-07-26 22:35:47.531155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.402 22:35:47 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.402 22:35:47 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:55.402 22:35:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:55.402 22:35:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:55.402 22:35:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:55.402 22:35:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:55.402 22:35:47 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.402 22:35:47 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.402 22:35:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.402 ************************************ 00:05:55.402 START TEST rpc_integrity 00:05:55.402 ************************************ 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:55.402 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.402 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.402 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.402 22:35:47 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.402 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.402 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:55.402 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.402 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.402 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.402 { 00:05:55.402 "name": "Malloc0", 00:05:55.402 "aliases": [ 00:05:55.402 "d12bb468-2bbe-471a-8b32-3a57904c9865" 00:05:55.402 ], 00:05:55.402 "product_name": "Malloc disk", 00:05:55.402 "block_size": 512, 00:05:55.402 "num_blocks": 16384, 00:05:55.402 "uuid": "d12bb468-2bbe-471a-8b32-3a57904c9865", 00:05:55.402 "assigned_rate_limits": { 00:05:55.402 "rw_ios_per_sec": 0, 00:05:55.402 "rw_mbytes_per_sec": 0, 00:05:55.402 "r_mbytes_per_sec": 0, 00:05:55.402 "w_mbytes_per_sec": 0 00:05:55.402 }, 00:05:55.402 "claimed": false, 00:05:55.402 "zoned": false, 00:05:55.402 "supported_io_types": { 00:05:55.402 "read": true, 00:05:55.402 "write": true, 00:05:55.402 "unmap": true, 00:05:55.402 "write_zeroes": true, 00:05:55.402 "flush": true, 00:05:55.402 "reset": true, 00:05:55.402 "compare": false, 00:05:55.402 "compare_and_write": false, 00:05:55.402 "abort": true, 00:05:55.402 "nvme_admin": false, 00:05:55.402 "nvme_io": false 00:05:55.402 }, 00:05:55.402 "memory_domains": [ 00:05:55.402 { 00:05:55.402 "dma_device_id": "system", 00:05:55.402 "dma_device_type": 1 00:05:55.402 }, 00:05:55.402 { 00:05:55.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.402 "dma_device_type": 2 00:05:55.402 } 00:05:55.402 ], 00:05:55.402 "driver_specific": {} 00:05:55.402 } 00:05:55.402 ]' 00:05:55.402 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.660 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.660 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:55.660 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.660 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.660 [2024-07-26 22:35:47.923264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:55.660 [2024-07-26 22:35:47.923308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.660 [2024-07-26 22:35:47.923331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e3d8f0 00:05:55.660 [2024-07-26 22:35:47.923368] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.660 [2024-07-26 22:35:47.924812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.660 [2024-07-26 22:35:47.924841] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.660 Passthru0 00:05:55.660 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.660 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:55.660 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.660 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.660 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.660 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.660 { 00:05:55.660 "name": "Malloc0", 00:05:55.660 "aliases": [ 00:05:55.660 "d12bb468-2bbe-471a-8b32-3a57904c9865" 00:05:55.661 ], 00:05:55.661 "product_name": "Malloc disk", 00:05:55.661 "block_size": 512, 00:05:55.661 "num_blocks": 16384, 00:05:55.661 "uuid": "d12bb468-2bbe-471a-8b32-3a57904c9865", 00:05:55.661 "assigned_rate_limits": { 00:05:55.661 "rw_ios_per_sec": 0, 00:05:55.661 "rw_mbytes_per_sec": 0, 00:05:55.661 "r_mbytes_per_sec": 0, 00:05:55.661 "w_mbytes_per_sec": 0 00:05:55.661 }, 00:05:55.661 "claimed": true, 00:05:55.661 "claim_type": "exclusive_write", 00:05:55.661 "zoned": false, 00:05:55.661 "supported_io_types": { 00:05:55.661 "read": true, 00:05:55.661 "write": true, 00:05:55.661 "unmap": true, 00:05:55.661 "write_zeroes": true, 00:05:55.661 "flush": true, 00:05:55.661 "reset": true, 00:05:55.661 "compare": false, 00:05:55.661 "compare_and_write": false, 00:05:55.661 "abort": true, 00:05:55.661 "nvme_admin": false, 00:05:55.661 "nvme_io": false 00:05:55.661 }, 00:05:55.661 "memory_domains": [ 00:05:55.661 { 00:05:55.661 "dma_device_id": "system", 00:05:55.661 "dma_device_type": 1 00:05:55.661 }, 00:05:55.661 { 00:05:55.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.661 "dma_device_type": 2 00:05:55.661 } 00:05:55.661 ], 00:05:55.661 "driver_specific": {} 00:05:55.661 }, 00:05:55.661 { 00:05:55.661 "name": "Passthru0", 00:05:55.661 "aliases": [ 00:05:55.661 "f8da3ea9-1fca-5f23-9e09-45b958e7fad7" 00:05:55.661 ], 00:05:55.661 "product_name": "passthru", 00:05:55.661 "block_size": 512, 00:05:55.661 "num_blocks": 16384, 00:05:55.661 "uuid": "f8da3ea9-1fca-5f23-9e09-45b958e7fad7", 00:05:55.661 "assigned_rate_limits": { 00:05:55.661 "rw_ios_per_sec": 0, 00:05:55.661 "rw_mbytes_per_sec": 0, 00:05:55.661 "r_mbytes_per_sec": 0, 00:05:55.661 "w_mbytes_per_sec": 0 00:05:55.661 }, 00:05:55.661 "claimed": false, 00:05:55.661 "zoned": false, 00:05:55.661 "supported_io_types": { 00:05:55.661 "read": true, 00:05:55.661 "write": true, 00:05:55.661 "unmap": true, 00:05:55.661 "write_zeroes": true, 00:05:55.661 "flush": true, 00:05:55.661 "reset": true, 00:05:55.661 "compare": false, 00:05:55.661 "compare_and_write": false, 00:05:55.661 "abort": true, 00:05:55.661 "nvme_admin": false, 00:05:55.661 "nvme_io": false 00:05:55.661 }, 00:05:55.661 "memory_domains": [ 00:05:55.661 { 00:05:55.661 "dma_device_id": "system", 00:05:55.661 "dma_device_type": 1 00:05:55.661 }, 00:05:55.661 { 00:05:55.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.661 "dma_device_type": 2 00:05:55.661 } 00:05:55.661 ], 00:05:55.661 "driver_specific": { 00:05:55.661 "passthru": { 00:05:55.661 "name": "Passthru0", 00:05:55.661 "base_bdev_name": "Malloc0" 00:05:55.661 } 00:05:55.661 } 00:05:55.661 } 00:05:55.661 ]' 00:05:55.661 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.661 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.661 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.661 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.661 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 
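Stripped of the xtrace plumbing, rpc_integrity is a small create/wrap/verify/delete cycle over JSON-RPC. The same sequence can be reproduced by hand with rpc.py; every command and both jq length checks below appear verbatim in the trace:

    rpc="$rootdir/scripts/rpc.py"
    $rpc bdev_malloc_create 8 512                      # -> Malloc0 (16384 x 512-byte blocks)
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # wrap it in a passthru vbdev
    $rpc bdev_get_bdevs | jq length                    # expect 2, as checked above
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                    # expect 0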
22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.661 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:55.661 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.661 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.661 22:35:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.661 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.661 22:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 22:35:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.661 22:35:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:55.661 22:35:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.661 22:35:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.661 00:05:55.661 real 0m0.235s 00:05:55.661 user 0m0.154s 00:05:55.661 sys 0m0.021s 00:05:55.661 22:35:48 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.661 22:35:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 ************************************ 00:05:55.661 END TEST rpc_integrity 00:05:55.661 ************************************ 00:05:55.661 22:35:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:55.661 22:35:48 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.661 22:35:48 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.661 22:35:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 ************************************ 00:05:55.661 START TEST rpc_plugins 00:05:55.661 ************************************ 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:55.661 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.661 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:55.661 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.661 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:55.661 { 00:05:55.661 "name": "Malloc1", 00:05:55.661 "aliases": [ 00:05:55.661 "0fc5fb51-bded-4e26-b657-d9c62b08e913" 00:05:55.661 ], 00:05:55.661 "product_name": "Malloc disk", 00:05:55.661 "block_size": 4096, 00:05:55.661 "num_blocks": 256, 00:05:55.661 "uuid": "0fc5fb51-bded-4e26-b657-d9c62b08e913", 00:05:55.661 "assigned_rate_limits": { 00:05:55.661 "rw_ios_per_sec": 0, 00:05:55.661 "rw_mbytes_per_sec": 0, 00:05:55.661 "r_mbytes_per_sec": 0, 00:05:55.661 "w_mbytes_per_sec": 0 00:05:55.661 }, 00:05:55.661 "claimed": false, 00:05:55.661 "zoned": false, 00:05:55.661 "supported_io_types": { 00:05:55.661 "read": true, 00:05:55.661 "write": true, 00:05:55.661 "unmap": true, 00:05:55.661 "write_zeroes": true, 00:05:55.661 
"flush": true, 00:05:55.661 "reset": true, 00:05:55.661 "compare": false, 00:05:55.661 "compare_and_write": false, 00:05:55.661 "abort": true, 00:05:55.661 "nvme_admin": false, 00:05:55.661 "nvme_io": false 00:05:55.661 }, 00:05:55.661 "memory_domains": [ 00:05:55.661 { 00:05:55.661 "dma_device_id": "system", 00:05:55.661 "dma_device_type": 1 00:05:55.661 }, 00:05:55.661 { 00:05:55.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.661 "dma_device_type": 2 00:05:55.661 } 00:05:55.661 ], 00:05:55.661 "driver_specific": {} 00:05:55.661 } 00:05:55.661 ]' 00:05:55.661 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:55.661 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:55.661 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.661 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.661 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.919 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.919 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:55.919 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:55.919 22:35:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:55.919 00:05:55.919 real 0m0.105s 00:05:55.919 user 0m0.066s 00:05:55.919 sys 0m0.012s 00:05:55.919 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.919 22:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.919 ************************************ 00:05:55.919 END TEST rpc_plugins 00:05:55.919 ************************************ 00:05:55.919 22:35:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:55.919 22:35:48 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.919 22:35:48 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.919 22:35:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.919 ************************************ 00:05:55.919 START TEST rpc_trace_cmd_test 00:05:55.919 ************************************ 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:55.919 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3408373", 00:05:55.919 "tpoint_group_mask": "0x8", 00:05:55.919 "iscsi_conn": { 00:05:55.919 "mask": "0x2", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "scsi": { 00:05:55.919 "mask": "0x4", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "bdev": { 00:05:55.919 "mask": "0x8", 00:05:55.919 "tpoint_mask": 
"0xffffffffffffffff" 00:05:55.919 }, 00:05:55.919 "nvmf_rdma": { 00:05:55.919 "mask": "0x10", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "nvmf_tcp": { 00:05:55.919 "mask": "0x20", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "ftl": { 00:05:55.919 "mask": "0x40", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "blobfs": { 00:05:55.919 "mask": "0x80", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "dsa": { 00:05:55.919 "mask": "0x200", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "thread": { 00:05:55.919 "mask": "0x400", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "nvme_pcie": { 00:05:55.919 "mask": "0x800", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "iaa": { 00:05:55.919 "mask": "0x1000", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "nvme_tcp": { 00:05:55.919 "mask": "0x2000", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "bdev_nvme": { 00:05:55.919 "mask": "0x4000", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 }, 00:05:55.919 "sock": { 00:05:55.919 "mask": "0x8000", 00:05:55.919 "tpoint_mask": "0x0" 00:05:55.919 } 00:05:55.919 }' 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:55.919 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:56.177 22:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:56.177 00:05:56.177 real 0m0.196s 00:05:56.177 user 0m0.170s 00:05:56.177 sys 0m0.017s 00:05:56.177 22:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.177 22:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.177 ************************************ 00:05:56.177 END TEST rpc_trace_cmd_test 00:05:56.177 ************************************ 00:05:56.177 22:35:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:56.177 22:35:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:56.177 22:35:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:56.177 22:35:48 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.177 22:35:48 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.177 22:35:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.177 ************************************ 00:05:56.177 START TEST rpc_daemon_integrity 00:05:56.177 ************************************ 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.177 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:56.177 { 00:05:56.177 "name": "Malloc2", 00:05:56.177 "aliases": [ 00:05:56.177 "9bb1d8dc-ba02-4fba-aa72-05c62c5ead1d" 00:05:56.177 ], 00:05:56.177 "product_name": "Malloc disk", 00:05:56.178 "block_size": 512, 00:05:56.178 "num_blocks": 16384, 00:05:56.178 "uuid": "9bb1d8dc-ba02-4fba-aa72-05c62c5ead1d", 00:05:56.178 "assigned_rate_limits": { 00:05:56.178 "rw_ios_per_sec": 0, 00:05:56.178 "rw_mbytes_per_sec": 0, 00:05:56.178 "r_mbytes_per_sec": 0, 00:05:56.178 "w_mbytes_per_sec": 0 00:05:56.178 }, 00:05:56.178 "claimed": false, 00:05:56.178 "zoned": false, 00:05:56.178 "supported_io_types": { 00:05:56.178 "read": true, 00:05:56.178 "write": true, 00:05:56.178 "unmap": true, 00:05:56.178 "write_zeroes": true, 00:05:56.178 "flush": true, 00:05:56.178 "reset": true, 00:05:56.178 "compare": false, 00:05:56.178 "compare_and_write": false, 00:05:56.178 "abort": true, 00:05:56.178 "nvme_admin": false, 00:05:56.178 "nvme_io": false 00:05:56.178 }, 00:05:56.178 "memory_domains": [ 00:05:56.178 { 00:05:56.178 "dma_device_id": "system", 00:05:56.178 "dma_device_type": 1 00:05:56.178 }, 00:05:56.178 { 00:05:56.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.178 "dma_device_type": 2 00:05:56.178 } 00:05:56.178 ], 00:05:56.178 "driver_specific": {} 00:05:56.178 } 00:05:56.178 ]' 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.178 [2024-07-26 22:35:48.593402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:56.178 [2024-07-26 22:35:48.593450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:56.178 [2024-07-26 22:35:48.593473] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d38600 00:05:56.178 [2024-07-26 22:35:48.593488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:56.178 [2024-07-26 22:35:48.594933] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:56.178 [2024-07-26 22:35:48.594962] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:56.178 Passthru0 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:56.178 { 00:05:56.178 "name": "Malloc2", 00:05:56.178 "aliases": [ 00:05:56.178 "9bb1d8dc-ba02-4fba-aa72-05c62c5ead1d" 00:05:56.178 ], 00:05:56.178 "product_name": "Malloc disk", 00:05:56.178 "block_size": 512, 00:05:56.178 "num_blocks": 16384, 00:05:56.178 "uuid": "9bb1d8dc-ba02-4fba-aa72-05c62c5ead1d", 00:05:56.178 "assigned_rate_limits": { 00:05:56.178 "rw_ios_per_sec": 0, 00:05:56.178 "rw_mbytes_per_sec": 0, 00:05:56.178 "r_mbytes_per_sec": 0, 00:05:56.178 "w_mbytes_per_sec": 0 00:05:56.178 }, 00:05:56.178 "claimed": true, 00:05:56.178 "claim_type": "exclusive_write", 00:05:56.178 "zoned": false, 00:05:56.178 "supported_io_types": { 00:05:56.178 "read": true, 00:05:56.178 "write": true, 00:05:56.178 "unmap": true, 00:05:56.178 "write_zeroes": true, 00:05:56.178 "flush": true, 00:05:56.178 "reset": true, 00:05:56.178 "compare": false, 00:05:56.178 "compare_and_write": false, 00:05:56.178 "abort": true, 00:05:56.178 "nvme_admin": false, 00:05:56.178 "nvme_io": false 00:05:56.178 }, 00:05:56.178 "memory_domains": [ 00:05:56.178 { 00:05:56.178 "dma_device_id": "system", 00:05:56.178 "dma_device_type": 1 00:05:56.178 }, 00:05:56.178 { 00:05:56.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.178 "dma_device_type": 2 00:05:56.178 } 00:05:56.178 ], 00:05:56.178 "driver_specific": {} 00:05:56.178 }, 00:05:56.178 { 00:05:56.178 "name": "Passthru0", 00:05:56.178 "aliases": [ 00:05:56.178 "effc0574-6239-5991-bd54-804bdebb8010" 00:05:56.178 ], 00:05:56.178 "product_name": "passthru", 00:05:56.178 "block_size": 512, 00:05:56.178 "num_blocks": 16384, 00:05:56.178 "uuid": "effc0574-6239-5991-bd54-804bdebb8010", 00:05:56.178 "assigned_rate_limits": { 00:05:56.178 "rw_ios_per_sec": 0, 00:05:56.178 "rw_mbytes_per_sec": 0, 00:05:56.178 "r_mbytes_per_sec": 0, 00:05:56.178 "w_mbytes_per_sec": 0 00:05:56.178 }, 00:05:56.178 "claimed": false, 00:05:56.178 "zoned": false, 00:05:56.178 "supported_io_types": { 00:05:56.178 "read": true, 00:05:56.178 "write": true, 00:05:56.178 "unmap": true, 00:05:56.178 "write_zeroes": true, 00:05:56.178 "flush": true, 00:05:56.178 "reset": true, 00:05:56.178 "compare": false, 00:05:56.178 "compare_and_write": false, 00:05:56.178 "abort": true, 00:05:56.178 "nvme_admin": false, 00:05:56.178 "nvme_io": false 00:05:56.178 }, 00:05:56.178 "memory_domains": [ 00:05:56.178 { 00:05:56.178 "dma_device_id": "system", 00:05:56.178 "dma_device_type": 1 00:05:56.178 }, 00:05:56.178 { 00:05:56.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.178 "dma_device_type": 2 00:05:56.178 } 00:05:56.178 ], 00:05:56.178 "driver_specific": { 00:05:56.178 "passthru": { 00:05:56.178 "name": "Passthru0", 00:05:56.178 "base_bdev_name": "Malloc2" 00:05:56.178 } 00:05:56.178 } 00:05:56.178 } 00:05:56.178 ]' 00:05:56.178 22:35:48 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.178 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.436 22:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.436 00:05:56.436 real 0m0.222s 00:05:56.436 user 0m0.147s 00:05:56.436 sys 0m0.021s 00:05:56.436 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.436 22:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.436 ************************************ 00:05:56.436 END TEST rpc_daemon_integrity 00:05:56.436 ************************************ 00:05:56.436 22:35:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:56.436 22:35:48 rpc -- rpc/rpc.sh@84 -- # killprocess 3408373 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@946 -- # '[' -z 3408373 ']' 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@950 -- # kill -0 3408373 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@951 -- # uname 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3408373 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3408373' 00:05:56.436 killing process with pid 3408373 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@965 -- # kill 3408373 00:05:56.436 22:35:48 rpc -- common/autotest_common.sh@970 -- # wait 3408373 00:05:56.714 00:05:56.714 real 0m1.874s 00:05:56.714 user 0m2.379s 00:05:56.714 sys 0m0.563s 00:05:56.714 22:35:49 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.714 22:35:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.714 ************************************ 00:05:56.714 END TEST rpc 00:05:56.714 ************************************ 00:05:56.714 22:35:49 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:56.714 22:35:49 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.714 22:35:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.714 22:35:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.982 ************************************ 00:05:56.982 START TEST skip_rpc 00:05:56.982 ************************************ 00:05:56.982 22:35:49 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:56.982 * Looking for test storage... 00:05:56.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.982 22:35:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.982 22:35:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.982 22:35:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:56.982 22:35:49 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.982 22:35:49 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.982 22:35:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.982 ************************************ 00:05:56.982 START TEST skip_rpc 00:05:56.982 ************************************ 00:05:56.982 22:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:56.982 22:35:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3408811 00:05:56.982 22:35:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:56.982 22:35:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.982 22:35:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:56.982 [2024-07-26 22:35:49.335270] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
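The skip_rpc case starting above launches spdk_tgt with --no-rpc-server, so the test's later rpc_cmd spdk_get_version has to fail with a non-zero status rather than hang. A minimal stand-alone sketch of that assertion (the SPDK_BIN/RPC paths and the fixed sleep are assumptions taken from this workspace; the real harness uses its rpc_cmd wrapper and traps SIGINT/SIGTERM):

    #!/usr/bin/env bash
    # Hypothetical reproduction of the skip_rpc check: with no RPC server
    # there is no /var/tmp/spdk.sock, so any client call must fail cleanly.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$SPDK_BIN" --no-rpc-server -m 0x1 &   # target runs without an RPC listener
    TGT_PID=$!
    sleep 5                                # mirrors the test's 'sleep 5' above

    # spdk_get_version is the cheapest RPC; it must return non-zero here
    # (the log below records this as es=1).
    if "$RPC" spdk_get_version; then
        echo "FAIL: RPC succeeded without an RPC server" >&2
        kill "$TGT_PID"; exit 1
    fi
    echo "PASS: rpc_cmd failed as expected"
    kill "$TGT_PID"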
00:05:56.982 [2024-07-26 22:35:49.335378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408811 ] 00:05:56.982 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.982 [2024-07-26 22:35:49.391734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.982 [2024-07-26 22:35:49.479806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.241 22:35:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:02.241 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:02.241 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:02.241 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:02.241 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3408811 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3408811 ']' 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3408811 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3408811 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3408811' 00:06:02.242 killing process with pid 3408811 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3408811 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3408811 00:06:02.242 00:06:02.242 real 0m5.441s 00:06:02.242 user 0m5.125s 00:06:02.242 sys 0m0.323s 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.242 22:35:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.242 ************************************ 00:06:02.242 END TEST skip_rpc 
00:06:02.242 ************************************ 00:06:02.500 22:35:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:02.500 22:35:54 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.500 22:35:54 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.500 22:35:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.500 ************************************ 00:06:02.500 START TEST skip_rpc_with_json 00:06:02.500 ************************************ 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3409498 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3409498 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3409498 ']' 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.500 22:35:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.500 [2024-07-26 22:35:54.824696] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:02.501 [2024-07-26 22:35:54.824778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409498 ] 00:06:02.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.501 [2024-07-26 22:35:54.887034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.501 [2024-07-26 22:35:54.977726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.759 [2024-07-26 22:35:55.236981] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:02.759 request: 00:06:02.759 { 00:06:02.759 "trtype": "tcp", 00:06:02.759 "method": "nvmf_get_transports", 00:06:02.759 "req_id": 1 00:06:02.759 } 00:06:02.759 Got JSON-RPC error response 00:06:02.759 response: 00:06:02.759 { 00:06:02.759 "code": -19, 00:06:02.759 "message": "No such device" 00:06:02.759 } 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.759 [2024-07-26 22:35:55.245117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.759 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.017 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.017 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.017 { 00:06:03.017 "subsystems": [ 00:06:03.017 { 00:06:03.017 "subsystem": "vfio_user_target", 00:06:03.017 "config": null 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "keyring", 00:06:03.017 "config": [] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "iobuf", 00:06:03.017 "config": [ 00:06:03.017 { 00:06:03.017 "method": "iobuf_set_options", 00:06:03.017 "params": { 00:06:03.017 "small_pool_count": 8192, 00:06:03.017 "large_pool_count": 1024, 00:06:03.017 "small_bufsize": 8192, 00:06:03.017 "large_bufsize": 135168 00:06:03.017 } 00:06:03.017 } 00:06:03.017 ] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "sock", 00:06:03.017 "config": [ 00:06:03.017 { 00:06:03.017 "method": "sock_set_default_impl", 00:06:03.017 "params": { 00:06:03.017 "impl_name": "posix" 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": 
"sock_impl_set_options", 00:06:03.017 "params": { 00:06:03.017 "impl_name": "ssl", 00:06:03.017 "recv_buf_size": 4096, 00:06:03.017 "send_buf_size": 4096, 00:06:03.017 "enable_recv_pipe": true, 00:06:03.017 "enable_quickack": false, 00:06:03.017 "enable_placement_id": 0, 00:06:03.017 "enable_zerocopy_send_server": true, 00:06:03.017 "enable_zerocopy_send_client": false, 00:06:03.017 "zerocopy_threshold": 0, 00:06:03.017 "tls_version": 0, 00:06:03.017 "enable_ktls": false 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": "sock_impl_set_options", 00:06:03.017 "params": { 00:06:03.017 "impl_name": "posix", 00:06:03.017 "recv_buf_size": 2097152, 00:06:03.017 "send_buf_size": 2097152, 00:06:03.017 "enable_recv_pipe": true, 00:06:03.017 "enable_quickack": false, 00:06:03.017 "enable_placement_id": 0, 00:06:03.017 "enable_zerocopy_send_server": true, 00:06:03.017 "enable_zerocopy_send_client": false, 00:06:03.017 "zerocopy_threshold": 0, 00:06:03.017 "tls_version": 0, 00:06:03.017 "enable_ktls": false 00:06:03.017 } 00:06:03.017 } 00:06:03.017 ] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "vmd", 00:06:03.017 "config": [] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "accel", 00:06:03.017 "config": [ 00:06:03.017 { 00:06:03.017 "method": "accel_set_options", 00:06:03.017 "params": { 00:06:03.017 "small_cache_size": 128, 00:06:03.017 "large_cache_size": 16, 00:06:03.017 "task_count": 2048, 00:06:03.017 "sequence_count": 2048, 00:06:03.017 "buf_count": 2048 00:06:03.017 } 00:06:03.017 } 00:06:03.017 ] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "bdev", 00:06:03.017 "config": [ 00:06:03.017 { 00:06:03.017 "method": "bdev_set_options", 00:06:03.017 "params": { 00:06:03.017 "bdev_io_pool_size": 65535, 00:06:03.017 "bdev_io_cache_size": 256, 00:06:03.017 "bdev_auto_examine": true, 00:06:03.017 "iobuf_small_cache_size": 128, 00:06:03.017 "iobuf_large_cache_size": 16 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": "bdev_raid_set_options", 00:06:03.017 "params": { 00:06:03.017 "process_window_size_kb": 1024 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": "bdev_iscsi_set_options", 00:06:03.017 "params": { 00:06:03.017 "timeout_sec": 30 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": "bdev_nvme_set_options", 00:06:03.017 "params": { 00:06:03.017 "action_on_timeout": "none", 00:06:03.017 "timeout_us": 0, 00:06:03.017 "timeout_admin_us": 0, 00:06:03.017 "keep_alive_timeout_ms": 10000, 00:06:03.017 "arbitration_burst": 0, 00:06:03.017 "low_priority_weight": 0, 00:06:03.017 "medium_priority_weight": 0, 00:06:03.017 "high_priority_weight": 0, 00:06:03.017 "nvme_adminq_poll_period_us": 10000, 00:06:03.017 "nvme_ioq_poll_period_us": 0, 00:06:03.017 "io_queue_requests": 0, 00:06:03.017 "delay_cmd_submit": true, 00:06:03.017 "transport_retry_count": 4, 00:06:03.017 "bdev_retry_count": 3, 00:06:03.017 "transport_ack_timeout": 0, 00:06:03.017 "ctrlr_loss_timeout_sec": 0, 00:06:03.017 "reconnect_delay_sec": 0, 00:06:03.017 "fast_io_fail_timeout_sec": 0, 00:06:03.017 "disable_auto_failback": false, 00:06:03.017 "generate_uuids": false, 00:06:03.017 "transport_tos": 0, 00:06:03.017 "nvme_error_stat": false, 00:06:03.017 "rdma_srq_size": 0, 00:06:03.017 "io_path_stat": false, 00:06:03.017 "allow_accel_sequence": false, 00:06:03.017 "rdma_max_cq_size": 0, 00:06:03.017 "rdma_cm_event_timeout_ms": 0, 00:06:03.017 "dhchap_digests": [ 00:06:03.017 "sha256", 00:06:03.017 "sha384", 00:06:03.017 "sha512" 
00:06:03.017 ], 00:06:03.017 "dhchap_dhgroups": [ 00:06:03.017 "null", 00:06:03.017 "ffdhe2048", 00:06:03.017 "ffdhe3072", 00:06:03.017 "ffdhe4096", 00:06:03.017 "ffdhe6144", 00:06:03.017 "ffdhe8192" 00:06:03.017 ] 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": "bdev_nvme_set_hotplug", 00:06:03.017 "params": { 00:06:03.017 "period_us": 100000, 00:06:03.017 "enable": false 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": "bdev_wait_for_examine" 00:06:03.017 } 00:06:03.017 ] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "scsi", 00:06:03.017 "config": null 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "scheduler", 00:06:03.017 "config": [ 00:06:03.017 { 00:06:03.017 "method": "framework_set_scheduler", 00:06:03.017 "params": { 00:06:03.017 "name": "static" 00:06:03.017 } 00:06:03.017 } 00:06:03.017 ] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "vhost_scsi", 00:06:03.017 "config": [] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "vhost_blk", 00:06:03.017 "config": [] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "ublk", 00:06:03.017 "config": [] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "nbd", 00:06:03.017 "config": [] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "nvmf", 00:06:03.017 "config": [ 00:06:03.017 { 00:06:03.017 "method": "nvmf_set_config", 00:06:03.017 "params": { 00:06:03.017 "discovery_filter": "match_any", 00:06:03.017 "admin_cmd_passthru": { 00:06:03.017 "identify_ctrlr": false 00:06:03.017 } 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": "nvmf_set_max_subsystems", 00:06:03.017 "params": { 00:06:03.017 "max_subsystems": 1024 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": "nvmf_set_crdt", 00:06:03.017 "params": { 00:06:03.017 "crdt1": 0, 00:06:03.017 "crdt2": 0, 00:06:03.017 "crdt3": 0 00:06:03.017 } 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "method": "nvmf_create_transport", 00:06:03.017 "params": { 00:06:03.017 "trtype": "TCP", 00:06:03.017 "max_queue_depth": 128, 00:06:03.017 "max_io_qpairs_per_ctrlr": 127, 00:06:03.017 "in_capsule_data_size": 4096, 00:06:03.017 "max_io_size": 131072, 00:06:03.017 "io_unit_size": 131072, 00:06:03.017 "max_aq_depth": 128, 00:06:03.017 "num_shared_buffers": 511, 00:06:03.017 "buf_cache_size": 4294967295, 00:06:03.017 "dif_insert_or_strip": false, 00:06:03.017 "zcopy": false, 00:06:03.017 "c2h_success": true, 00:06:03.017 "sock_priority": 0, 00:06:03.017 "abort_timeout_sec": 1, 00:06:03.017 "ack_timeout": 0, 00:06:03.017 "data_wr_pool_size": 0 00:06:03.017 } 00:06:03.017 } 00:06:03.017 ] 00:06:03.017 }, 00:06:03.017 { 00:06:03.017 "subsystem": "iscsi", 00:06:03.017 "config": [ 00:06:03.017 { 00:06:03.017 "method": "iscsi_set_options", 00:06:03.017 "params": { 00:06:03.018 "node_base": "iqn.2016-06.io.spdk", 00:06:03.018 "max_sessions": 128, 00:06:03.018 "max_connections_per_session": 2, 00:06:03.018 "max_queue_depth": 64, 00:06:03.018 "default_time2wait": 2, 00:06:03.018 "default_time2retain": 20, 00:06:03.018 "first_burst_length": 8192, 00:06:03.018 "immediate_data": true, 00:06:03.018 "allow_duplicated_isid": false, 00:06:03.018 "error_recovery_level": 0, 00:06:03.018 "nop_timeout": 60, 00:06:03.018 "nop_in_interval": 30, 00:06:03.018 "disable_chap": false, 00:06:03.018 "require_chap": false, 00:06:03.018 "mutual_chap": false, 00:06:03.018 "chap_group": 0, 00:06:03.018 "max_large_datain_per_connection": 64, 00:06:03.018 "max_r2t_per_connection": 4, 00:06:03.018 
"pdu_pool_size": 36864, 00:06:03.018 "immediate_data_pool_size": 16384, 00:06:03.018 "data_out_pool_size": 2048 00:06:03.018 } 00:06:03.018 } 00:06:03.018 ] 00:06:03.018 } 00:06:03.018 ] 00:06:03.018 } 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3409498 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3409498 ']' 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3409498 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3409498 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3409498' 00:06:03.018 killing process with pid 3409498 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3409498 00:06:03.018 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3409498 00:06:03.584 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3409642 00:06:03.584 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.584 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3409642 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3409642 ']' 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3409642 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3409642 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3409642' 00:06:08.846 killing process with pid 3409642 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3409642 00:06:08.846 22:36:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3409642 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:08.846 00:06:08.846 real 
0m6.488s 00:06:08.846 user 0m6.068s 00:06:08.846 sys 0m0.709s 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.846 ************************************ 00:06:08.846 END TEST skip_rpc_with_json 00:06:08.846 ************************************ 00:06:08.846 22:36:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:08.846 22:36:01 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.846 22:36:01 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.846 22:36:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.846 ************************************ 00:06:08.846 START TEST skip_rpc_with_delay 00:06:08.846 ************************************ 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:08.846 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.104 [2024-07-26 22:36:01.354137] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
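The app.c error just above is the entire point of skip_rpc_with_delay: --wait-for-rpc asks spdk_tgt to block until an RPC arrives, which is contradictory once --no-rpc-server suppresses the listener, so the app must abort immediately instead of waiting forever. A hedged sketch of the same check (binary path assumed as before; the timeout guard is an addition to make a hang observable, the harness itself just relies on its NOT helper):

    #!/usr/bin/env bash
    # Sketch: spdk_tgt must exit non-zero, and quickly, on contradictory flags.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    status=0
    timeout 30 "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc || status=$?

    # 0 means it started (wrong); 124 means timeout killed a hang (also wrong);
    # any other non-zero status is the expected fast refusal seen in the log.
    if [ "$status" -eq 0 ] || [ "$status" -eq 124 ]; then
        echo "FAIL: target started or hung instead of erroring out" >&2
        exit 1
    fi
    echo "PASS: target refused --wait-for-rpc without an RPC server"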
00:06:09.104 [2024-07-26 22:36:01.354243] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:09.104 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:09.104 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.104 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.104 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.104 00:06:09.104 real 0m0.064s 00:06:09.104 user 0m0.047s 00:06:09.104 sys 0m0.017s 00:06:09.104 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.104 22:36:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:09.104 ************************************ 00:06:09.104 END TEST skip_rpc_with_delay 00:06:09.104 ************************************ 00:06:09.104 22:36:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:09.104 22:36:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:09.104 22:36:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:09.104 22:36:01 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.104 22:36:01 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.104 22:36:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.104 ************************************ 00:06:09.104 START TEST exit_on_failed_rpc_init 00:06:09.104 ************************************ 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3410442 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3410442 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3410442 ']' 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.104 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.104 [2024-07-26 22:36:01.463764] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:09.104 [2024-07-26 22:36:01.463859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410442 ] 00:06:09.104 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.104 [2024-07-26 22:36:01.525591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.361 [2024-07-26 22:36:01.619874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:09.619 22:36:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.619 [2024-07-26 22:36:01.925024] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:09.619 [2024-07-26 22:36:01.925134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410479 ] 00:06:09.619 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.619 [2024-07-26 22:36:01.985773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.619 [2024-07-26 22:36:02.079386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.619 [2024-07-26 22:36:02.079539] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
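Both targets in exit_on_failed_rpc_init default to the same /var/tmp/spdk.sock, which is exactly what makes rpc.c report the socket as in use above; the es=234/106/1 juggling that follows is the harness normalizing the exit code. A sketch of the collision under stated assumptions (paths as before; a crude sleep stands in for the harness's waitforlisten, and the harness also supplies its own EAL defaults):

    #!/usr/bin/env bash
    # Hypothetical reproduction of the RPC Unix-socket collision.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$SPDK_BIN" -m 0x1 &        # first target binds /var/tmp/spdk.sock
    FIRST=$!
    sleep 2                      # crude stand-in for waitforlisten

    # Second target, different core mask, same default RPC socket: rpc_listen
    # must fail and spdk_app_stop must propagate a non-zero exit code.
    if "$SPDK_BIN" -m 0x2; then
        echo "FAIL: second target should not have come up" >&2
        kill "$FIRST"; exit 1
    fi
    echo "PASS: second target exited on failed RPC init"
    kill "$FIRST"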
00:06:09.619 [2024-07-26 22:36:02.079559] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:09.619 [2024-07-26 22:36:02.079570] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3410442 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3410442 ']' 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3410442 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410442 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410442' 00:06:09.876 killing process with pid 3410442 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3410442 00:06:09.876 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3410442 00:06:10.135 00:06:10.135 real 0m1.180s 00:06:10.135 user 0m1.326s 00:06:10.135 sys 0m0.443s 00:06:10.135 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.135 22:36:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.135 ************************************ 00:06:10.135 END TEST exit_on_failed_rpc_init 00:06:10.135 ************************************ 00:06:10.135 22:36:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.135 00:06:10.135 real 0m13.405s 00:06:10.135 user 0m12.652s 00:06:10.135 sys 0m1.654s 00:06:10.135 22:36:02 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.135 22:36:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.135 ************************************ 00:06:10.135 END TEST skip_rpc 00:06:10.135 ************************************ 00:06:10.394 22:36:02 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:10.394 22:36:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.394 22:36:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.394 22:36:02 -- 
common/autotest_common.sh@10 -- # set +x 00:06:10.394 ************************************ 00:06:10.394 START TEST rpc_client 00:06:10.394 ************************************ 00:06:10.394 22:36:02 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:10.394 * Looking for test storage... 00:06:10.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:10.394 22:36:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:10.394 OK 00:06:10.394 22:36:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:10.394 00:06:10.394 real 0m0.065s 00:06:10.394 user 0m0.022s 00:06:10.394 sys 0m0.047s 00:06:10.394 22:36:02 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.394 22:36:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:10.394 ************************************ 00:06:10.394 END TEST rpc_client 00:06:10.394 ************************************ 00:06:10.394 22:36:02 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:10.394 22:36:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.394 22:36:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.394 22:36:02 -- common/autotest_common.sh@10 -- # set +x 00:06:10.394 ************************************ 00:06:10.394 START TEST json_config 00:06:10.394 ************************************ 00:06:10.394 22:36:02 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:10.394 22:36:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.394 22:36:02 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.394 22:36:02 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.394 22:36:02 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.394 22:36:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.394 22:36:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.394 22:36:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.394 22:36:02 json_config -- paths/export.sh@5 -- # export PATH 00:06:10.394 22:36:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@47 -- # : 0 00:06:10.394 22:36:02 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.395 22:36:02 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.395 22:36:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.395 22:36:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.395 22:36:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.395 22:36:02 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.395 22:36:02 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.395 22:36:02 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:10.395 INFO: JSON configuration test init 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.395 22:36:02 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:10.395 22:36:02 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.395 22:36:02 json_config -- json_config/common.sh@10 -- # shift 00:06:10.395 22:36:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.395 22:36:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.395 22:36:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.395 22:36:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.395 22:36:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.395 22:36:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3410721 00:06:10.395 22:36:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:10.395 22:36:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.395 Waiting for target to run... 
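The launch the trace is waiting on reduces to one command plus a poll of the RPC socket. A minimal sketch, assuming an SPDK checkout as the working directory (the until-loop is an illustrative stand-in for the suite's waitforlisten helper, not its actual code):

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # --wait-for-rpc holds subsystem initialization until an RPC client triggers it,
  # which is what lets the test push spdk_tgt_config.json via load_config afterwards
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # keep probing until the UNIX-domain RPC socket accepts connections
  done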
00:06:10.395 22:36:02 json_config -- json_config/common.sh@25 -- # waitforlisten 3410721 /var/tmp/spdk_tgt.sock 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@827 -- # '[' -z 3410721 ']' 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.395 22:36:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.395 [2024-07-26 22:36:02.886250] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:10.395 [2024-07-26 22:36:02.886348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410721 ] 00:06:10.653 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.914 [2024-07-26 22:36:03.249100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.914 [2024-07-26 22:36:03.313204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.480 22:36:03 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.480 22:36:03 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:11.480 22:36:03 json_config -- json_config/common.sh@26 -- # echo '' 00:06:11.480 00:06:11.480 22:36:03 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:11.480 22:36:03 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:11.480 22:36:03 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:11.480 22:36:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.480 22:36:03 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:11.480 22:36:03 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:11.480 22:36:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.480 22:36:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.480 22:36:03 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:11.480 22:36:03 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:11.480 22:36:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:14.762 22:36:06 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:14.762 22:36:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:14.762 22:36:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:14.762 22:36:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.762 22:36:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:14.762 22:36:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:14.762 22:36:06 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:06:14.762 22:36:06 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:14.762 22:36:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:14.762 22:36:06 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:14.762 22:36:07 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:14.762 22:36:07 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:14.762 22:36:07 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:14.762 22:36:07 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:14.762 22:36:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:14.762 22:36:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:15.020 22:36:07 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:15.020 22:36:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:15.020 22:36:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:15.020 MallocForNvmf0 00:06:15.020 22:36:07 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:15.020 22:36:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:15.585 MallocForNvmf1 00:06:15.585 22:36:07 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:15.585 22:36:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:15.585 [2024-07-26 22:36:08.041254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.585 22:36:08 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.585 22:36:08 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.843 22:36:08 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:15.843 22:36:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:16.100 22:36:08 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:16.100 22:36:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:16.357 22:36:08 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:16.357 22:36:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:16.614 [2024-07-26 22:36:09.052601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:16.614 22:36:09 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:16.614 22:36:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.614 22:36:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.614 22:36:09 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:16.614 22:36:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.614 22:36:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.614 22:36:09 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:16.615 22:36:09 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:16.615 22:36:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:16.871 MallocBdevForConfigChangeCheck 00:06:16.871 22:36:09 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:16.871 22:36:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.871 22:36:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.871 22:36:09 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:16.871 22:36:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.437 22:36:09 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:17.437 INFO: shutting down applications... 
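Stripped of the xtrace noise, the NVMe-oF/TCP target this test just assembled comes down to seven RPCs, all verbatim from the trace above (the full rpc.py path is abbreviated to scripts/rpc.py here):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Two malloc bdevs become namespaces of one subsystem, which then listens on 127.0.0.1:4420 — matching the "NVMe/TCP Target Listening" notice in the log.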
00:06:17.437 22:36:09 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:17.437 22:36:09 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:17.437 22:36:09 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:17.437 22:36:09 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:19.390 Calling clear_iscsi_subsystem 00:06:19.390 Calling clear_nvmf_subsystem 00:06:19.390 Calling clear_nbd_subsystem 00:06:19.390 Calling clear_ublk_subsystem 00:06:19.390 Calling clear_vhost_blk_subsystem 00:06:19.390 Calling clear_vhost_scsi_subsystem 00:06:19.390 Calling clear_bdev_subsystem 00:06:19.390 22:36:11 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:19.390 22:36:11 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:19.390 22:36:11 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:19.390 22:36:11 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.390 22:36:11 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:19.390 22:36:11 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:19.390 22:36:11 json_config -- json_config/json_config.sh@345 -- # break 00:06:19.390 22:36:11 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:19.390 22:36:11 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:19.390 22:36:11 json_config -- json_config/common.sh@31 -- # local app=target 00:06:19.390 22:36:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.390 22:36:11 json_config -- json_config/common.sh@35 -- # [[ -n 3410721 ]] 00:06:19.390 22:36:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3410721 00:06:19.390 22:36:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.390 22:36:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.390 22:36:11 json_config -- json_config/common.sh@41 -- # kill -0 3410721 00:06:19.390 22:36:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.957 22:36:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.957 22:36:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.957 22:36:12 json_config -- json_config/common.sh@41 -- # kill -0 3410721 00:06:19.957 22:36:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:19.957 22:36:12 json_config -- json_config/common.sh@43 -- # break 00:06:19.957 22:36:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:19.957 22:36:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:19.957 SPDK target shutdown done 00:06:19.957 22:36:12 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:19.957 INFO: relaunching applications... 
00:06:19.957 22:36:12 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:19.957 22:36:12 json_config -- json_config/common.sh@9 -- # local app=target 00:06:19.957 22:36:12 json_config -- json_config/common.sh@10 -- # shift 00:06:19.957 22:36:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:19.957 22:36:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:19.957 22:36:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:19.957 22:36:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.957 22:36:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.957 22:36:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3412422 00:06:19.957 22:36:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:19.957 22:36:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:19.957 Waiting for target to run... 00:06:19.957 22:36:12 json_config -- json_config/common.sh@25 -- # waitforlisten 3412422 /var/tmp/spdk_tgt.sock 00:06:19.957 22:36:12 json_config -- common/autotest_common.sh@827 -- # '[' -z 3412422 ']' 00:06:19.957 22:36:12 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:19.957 22:36:12 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.957 22:36:12 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:19.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:19.957 22:36:12 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.957 22:36:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.957 [2024-07-26 22:36:12.305778] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:19.957 [2024-07-26 22:36:12.305884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412422 ] 00:06:19.957 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.215 [2024-07-26 22:36:12.662242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.474 [2024-07-26 22:36:12.725825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.755 [2024-07-26 22:36:15.753970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.755 [2024-07-26 22:36:15.786428] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:23.755 22:36:15 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.755 22:36:15 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:23.755 22:36:15 json_config -- json_config/common.sh@26 -- # echo '' 00:06:23.755 00:06:23.755 22:36:15 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:23.755 22:36:15 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:23.755 INFO: Checking if target configuration is the same... 
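The verdicts that follow ("JSON config files are the same", then "configuration change detected") come from canonicalizing two configs and diffing them. A condensed sketch of what json_diff.sh does, with illustrative temp-file names in place of the mktemp outputs shown below:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json        # current config via RPC
  test/json_config/config_filter.py -method sort < live.json            > live.sorted
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > file.sorted
  diff -u live.sorted file.sorted && echo 'INFO: JSON config files are the same'
  # Sorting first makes the diff immune to key and section ordering; any surviving
  # difference (e.g. deleting MallocBdevForConfigChangeCheck) makes diff return 1.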
00:06:23.755 22:36:15 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.756 22:36:15 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:23.756 22:36:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.756 + '[' 2 -ne 2 ']' 00:06:23.756 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:23.756 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:23.756 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:23.756 +++ basename /dev/fd/62 00:06:23.756 ++ mktemp /tmp/62.XXX 00:06:23.756 + tmp_file_1=/tmp/62.fsN 00:06:23.756 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.756 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:23.756 + tmp_file_2=/tmp/spdk_tgt_config.json.xdm 00:06:23.756 + ret=0 00:06:23.756 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.756 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.756 + diff -u /tmp/62.fsN /tmp/spdk_tgt_config.json.xdm 00:06:23.756 + echo 'INFO: JSON config files are the same' 00:06:23.756 INFO: JSON config files are the same 00:06:23.756 + rm /tmp/62.fsN /tmp/spdk_tgt_config.json.xdm 00:06:23.756 + exit 0 00:06:23.756 22:36:16 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:23.756 22:36:16 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:23.756 INFO: changing configuration and checking if this can be detected... 00:06:23.756 22:36:16 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:23.756 22:36:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:24.014 22:36:16 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.014 22:36:16 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:24.014 22:36:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.014 + '[' 2 -ne 2 ']' 00:06:24.014 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:24.014 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:24.014 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:24.014 +++ basename /dev/fd/62 00:06:24.014 ++ mktemp /tmp/62.XXX 00:06:24.014 + tmp_file_1=/tmp/62.FFR 00:06:24.014 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.014 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:24.014 + tmp_file_2=/tmp/spdk_tgt_config.json.IHW 00:06:24.014 + ret=0 00:06:24.014 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.581 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.581 + diff -u /tmp/62.FFR /tmp/spdk_tgt_config.json.IHW 00:06:24.581 + ret=1 00:06:24.581 + echo '=== Start of file: /tmp/62.FFR ===' 00:06:24.581 + cat /tmp/62.FFR 00:06:24.581 + echo '=== End of file: /tmp/62.FFR ===' 00:06:24.581 + echo '' 00:06:24.581 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IHW ===' 00:06:24.581 + cat /tmp/spdk_tgt_config.json.IHW 00:06:24.581 + echo '=== End of file: /tmp/spdk_tgt_config.json.IHW ===' 00:06:24.581 + echo '' 00:06:24.581 + rm /tmp/62.FFR /tmp/spdk_tgt_config.json.IHW 00:06:24.581 + exit 1 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:24.581 INFO: configuration change detected. 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@317 -- # [[ -n 3412422 ]] 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.581 22:36:16 json_config -- json_config/json_config.sh@323 -- # killprocess 3412422 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@946 -- # '[' -z 3412422 ']' 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@950 -- # kill -0 3412422 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@951 -- # uname 00:06:24.581 22:36:16 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:24.581 22:36:16 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3412422 00:06:24.581 22:36:17 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:24.581 22:36:17 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:24.581 22:36:17 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3412422' 00:06:24.581 killing process with pid 3412422 00:06:24.581 22:36:17 json_config -- common/autotest_common.sh@965 -- # kill 3412422 00:06:24.581 22:36:17 json_config -- common/autotest_common.sh@970 -- # wait 3412422 00:06:26.480 22:36:18 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.480 22:36:18 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:26.480 22:36:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.480 22:36:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.480 22:36:18 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:26.480 22:36:18 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:26.480 INFO: Success 00:06:26.480 00:06:26.480 real 0m15.847s 00:06:26.480 user 0m17.851s 00:06:26.480 sys 0m1.842s 00:06:26.480 22:36:18 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.480 22:36:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.480 ************************************ 00:06:26.480 END TEST json_config 00:06:26.480 ************************************ 00:06:26.480 22:36:18 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:26.480 22:36:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.480 22:36:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.480 22:36:18 -- common/autotest_common.sh@10 -- # set +x 00:06:26.480 ************************************ 00:06:26.480 START TEST json_config_extra_key 00:06:26.480 ************************************ 00:06:26.480 22:36:18 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:26.480 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.480 22:36:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.481 22:36:18 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.481 22:36:18 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.481 22:36:18 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.481 22:36:18 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.481 22:36:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.481 22:36:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.481 22:36:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.481 22:36:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:26.481 22:36:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.481 22:36:18 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:26.481 22:36:18 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:26.481 INFO: launching applications... 00:06:26.481 22:36:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3413324 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:26.481 Waiting for target to run... 
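Note the difference from the earlier run: no --wait-for-rpc here. This test boots the target already configured from a JSON file, so no load_config RPC is needed afterwards. The invocation from the trace, reflowed for readability:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json   # config is applied during startup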
00:06:26.481 22:36:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3413324 /var/tmp/spdk_tgt.sock 00:06:26.481 22:36:18 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3413324 ']' 00:06:26.481 22:36:18 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:26.481 22:36:18 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.481 22:36:18 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:26.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:26.481 22:36:18 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.481 22:36:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:26.481 [2024-07-26 22:36:18.769764] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:26.481 [2024-07-26 22:36:18.769866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413324 ] 00:06:26.481 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.739 [2024-07-26 22:36:19.104595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.739 [2024-07-26 22:36:19.168097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.304 22:36:19 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.304 22:36:19 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:27.304 22:36:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:27.304 00:06:27.304 22:36:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:27.304 INFO: shutting down applications... 
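The shutdown sequence that follows is json_config/common.sh's kill-and-poll loop. An equivalent sketch (the for-loop is a readable stand-in for the script's (( i < 30 )) counter; app_pid corresponds to the 3413324 in the trace):

  kill -SIGINT "$app_pid"                      # ask the target to exit cleanly
  for i in $(seq 1 30); do
      kill -0 "$app_pid" 2>/dev/null || break  # kill -0 only probes for existence
      sleep 0.5                                # up to ~15 s before giving up
  done
  echo 'SPDK target shutdown done'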
00:06:27.304 22:36:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:27.304 22:36:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:27.304 22:36:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:27.304 22:36:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3413324 ]] 00:06:27.304 22:36:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3413324 00:06:27.304 22:36:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:27.304 22:36:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.304 22:36:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3413324 00:06:27.304 22:36:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.869 22:36:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.869 22:36:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.869 22:36:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3413324 00:06:27.869 22:36:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:27.869 22:36:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:27.869 22:36:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:27.869 22:36:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:27.869 SPDK target shutdown done 00:06:27.869 22:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:27.869 Success 00:06:27.869 00:06:27.869 real 0m1.538s 00:06:27.869 user 0m1.493s 00:06:27.869 sys 0m0.429s 00:06:27.869 22:36:20 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.869 22:36:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.869 ************************************ 00:06:27.869 END TEST json_config_extra_key 00:06:27.869 ************************************ 00:06:27.869 22:36:20 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.869 22:36:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.869 22:36:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.869 22:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:27.869 ************************************ 00:06:27.869 START TEST alias_rpc 00:06:27.869 ************************************ 00:06:27.869 22:36:20 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.869 * Looking for test storage... 
00:06:27.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:27.869 22:36:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.869 22:36:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3413638 00:06:27.869 22:36:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.869 22:36:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3413638 00:06:27.869 22:36:20 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3413638 ']' 00:06:27.869 22:36:20 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.870 22:36:20 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.870 22:36:20 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.870 22:36:20 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.870 22:36:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.870 [2024-07-26 22:36:20.353496] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:27.870 [2024-07-26 22:36:20.353592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413638 ] 00:06:28.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.127 [2024-07-26 22:36:20.413096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.127 [2024-07-26 22:36:20.497473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.385 22:36:20 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.385 22:36:20 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:28.385 22:36:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:28.642 22:36:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3413638 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3413638 ']' 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3413638 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3413638 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3413638' 00:06:28.642 killing process with pid 3413638 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@965 -- # kill 3413638 00:06:28.642 22:36:21 alias_rpc -- common/autotest_common.sh@970 -- # wait 3413638 00:06:29.207 00:06:29.207 real 0m1.174s 00:06:29.207 user 0m1.253s 00:06:29.207 sys 0m0.425s 00:06:29.207 22:36:21 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.207 22:36:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.207 
************************************ 00:06:29.207 END TEST alias_rpc 00:06:29.207 ************************************ 00:06:29.207 22:36:21 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:29.207 22:36:21 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:29.207 22:36:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.207 22:36:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.207 22:36:21 -- common/autotest_common.sh@10 -- # set +x 00:06:29.207 ************************************ 00:06:29.207 START TEST spdkcli_tcp 00:06:29.207 ************************************ 00:06:29.207 22:36:21 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:29.207 * Looking for test storage... 00:06:29.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:29.208 22:36:21 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:29.208 22:36:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3413824 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:29.208 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3413824 00:06:29.208 22:36:21 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3413824 ']' 00:06:29.208 22:36:21 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.208 22:36:21 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.208 22:36:21 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.208 22:36:21 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.208 22:36:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.208 [2024-07-26 22:36:21.571488] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:29.208 [2024-07-26 22:36:21.571564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413824 ] 00:06:29.208 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.208 [2024-07-26 22:36:21.627922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.466 [2024-07-26 22:36:21.713475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.466 [2024-07-26 22:36:21.713479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.724 22:36:21 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.724 22:36:21 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:29.724 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3413828 00:06:29.724 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:29.724 22:36:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:29.724 [ 00:06:29.724 "bdev_malloc_delete", 00:06:29.724 "bdev_malloc_create", 00:06:29.724 "bdev_null_resize", 00:06:29.724 "bdev_null_delete", 00:06:29.724 "bdev_null_create", 00:06:29.724 "bdev_nvme_cuse_unregister", 00:06:29.724 "bdev_nvme_cuse_register", 00:06:29.724 "bdev_opal_new_user", 00:06:29.724 "bdev_opal_set_lock_state", 00:06:29.724 "bdev_opal_delete", 00:06:29.724 "bdev_opal_get_info", 00:06:29.724 "bdev_opal_create", 00:06:29.724 "bdev_nvme_opal_revert", 00:06:29.724 "bdev_nvme_opal_init", 00:06:29.724 "bdev_nvme_send_cmd", 00:06:29.724 "bdev_nvme_get_path_iostat", 00:06:29.724 "bdev_nvme_get_mdns_discovery_info", 00:06:29.724 "bdev_nvme_stop_mdns_discovery", 00:06:29.724 "bdev_nvme_start_mdns_discovery", 00:06:29.724 "bdev_nvme_set_multipath_policy", 00:06:29.724 "bdev_nvme_set_preferred_path", 00:06:29.724 "bdev_nvme_get_io_paths", 00:06:29.724 "bdev_nvme_remove_error_injection", 00:06:29.724 "bdev_nvme_add_error_injection", 00:06:29.724 "bdev_nvme_get_discovery_info", 00:06:29.724 "bdev_nvme_stop_discovery", 00:06:29.724 "bdev_nvme_start_discovery", 00:06:29.724 "bdev_nvme_get_controller_health_info", 00:06:29.724 "bdev_nvme_disable_controller", 00:06:29.724 "bdev_nvme_enable_controller", 00:06:29.724 "bdev_nvme_reset_controller", 00:06:29.724 "bdev_nvme_get_transport_statistics", 00:06:29.724 "bdev_nvme_apply_firmware", 00:06:29.724 "bdev_nvme_detach_controller", 00:06:29.724 "bdev_nvme_get_controllers", 00:06:29.724 "bdev_nvme_attach_controller", 00:06:29.724 "bdev_nvme_set_hotplug", 00:06:29.724 "bdev_nvme_set_options", 00:06:29.724 "bdev_passthru_delete", 00:06:29.724 "bdev_passthru_create", 00:06:29.724 "bdev_lvol_set_parent_bdev", 00:06:29.724 "bdev_lvol_set_parent", 00:06:29.724 "bdev_lvol_check_shallow_copy", 00:06:29.724 "bdev_lvol_start_shallow_copy", 00:06:29.724 "bdev_lvol_grow_lvstore", 00:06:29.724 "bdev_lvol_get_lvols", 00:06:29.724 "bdev_lvol_get_lvstores", 00:06:29.724 "bdev_lvol_delete", 00:06:29.724 "bdev_lvol_set_read_only", 00:06:29.724 "bdev_lvol_resize", 00:06:29.724 "bdev_lvol_decouple_parent", 00:06:29.724 "bdev_lvol_inflate", 00:06:29.724 "bdev_lvol_rename", 00:06:29.724 "bdev_lvol_clone_bdev", 00:06:29.724 "bdev_lvol_clone", 00:06:29.724 "bdev_lvol_snapshot", 00:06:29.724 "bdev_lvol_create", 00:06:29.724 "bdev_lvol_delete_lvstore", 00:06:29.724 "bdev_lvol_rename_lvstore", 
00:06:29.724 "bdev_lvol_create_lvstore", 00:06:29.724 "bdev_raid_set_options", 00:06:29.724 "bdev_raid_remove_base_bdev", 00:06:29.724 "bdev_raid_add_base_bdev", 00:06:29.724 "bdev_raid_delete", 00:06:29.724 "bdev_raid_create", 00:06:29.724 "bdev_raid_get_bdevs", 00:06:29.724 "bdev_error_inject_error", 00:06:29.724 "bdev_error_delete", 00:06:29.724 "bdev_error_create", 00:06:29.724 "bdev_split_delete", 00:06:29.724 "bdev_split_create", 00:06:29.724 "bdev_delay_delete", 00:06:29.724 "bdev_delay_create", 00:06:29.724 "bdev_delay_update_latency", 00:06:29.724 "bdev_zone_block_delete", 00:06:29.724 "bdev_zone_block_create", 00:06:29.724 "blobfs_create", 00:06:29.724 "blobfs_detect", 00:06:29.724 "blobfs_set_cache_size", 00:06:29.724 "bdev_aio_delete", 00:06:29.724 "bdev_aio_rescan", 00:06:29.724 "bdev_aio_create", 00:06:29.724 "bdev_ftl_set_property", 00:06:29.724 "bdev_ftl_get_properties", 00:06:29.724 "bdev_ftl_get_stats", 00:06:29.724 "bdev_ftl_unmap", 00:06:29.724 "bdev_ftl_unload", 00:06:29.724 "bdev_ftl_delete", 00:06:29.724 "bdev_ftl_load", 00:06:29.724 "bdev_ftl_create", 00:06:29.724 "bdev_virtio_attach_controller", 00:06:29.724 "bdev_virtio_scsi_get_devices", 00:06:29.724 "bdev_virtio_detach_controller", 00:06:29.724 "bdev_virtio_blk_set_hotplug", 00:06:29.724 "bdev_iscsi_delete", 00:06:29.724 "bdev_iscsi_create", 00:06:29.724 "bdev_iscsi_set_options", 00:06:29.724 "accel_error_inject_error", 00:06:29.724 "ioat_scan_accel_module", 00:06:29.724 "dsa_scan_accel_module", 00:06:29.724 "iaa_scan_accel_module", 00:06:29.724 "vfu_virtio_create_scsi_endpoint", 00:06:29.724 "vfu_virtio_scsi_remove_target", 00:06:29.724 "vfu_virtio_scsi_add_target", 00:06:29.724 "vfu_virtio_create_blk_endpoint", 00:06:29.724 "vfu_virtio_delete_endpoint", 00:06:29.724 "keyring_file_remove_key", 00:06:29.724 "keyring_file_add_key", 00:06:29.724 "keyring_linux_set_options", 00:06:29.724 "iscsi_get_histogram", 00:06:29.724 "iscsi_enable_histogram", 00:06:29.724 "iscsi_set_options", 00:06:29.724 "iscsi_get_auth_groups", 00:06:29.724 "iscsi_auth_group_remove_secret", 00:06:29.724 "iscsi_auth_group_add_secret", 00:06:29.724 "iscsi_delete_auth_group", 00:06:29.724 "iscsi_create_auth_group", 00:06:29.724 "iscsi_set_discovery_auth", 00:06:29.724 "iscsi_get_options", 00:06:29.724 "iscsi_target_node_request_logout", 00:06:29.724 "iscsi_target_node_set_redirect", 00:06:29.724 "iscsi_target_node_set_auth", 00:06:29.724 "iscsi_target_node_add_lun", 00:06:29.724 "iscsi_get_stats", 00:06:29.724 "iscsi_get_connections", 00:06:29.724 "iscsi_portal_group_set_auth", 00:06:29.724 "iscsi_start_portal_group", 00:06:29.724 "iscsi_delete_portal_group", 00:06:29.724 "iscsi_create_portal_group", 00:06:29.724 "iscsi_get_portal_groups", 00:06:29.724 "iscsi_delete_target_node", 00:06:29.724 "iscsi_target_node_remove_pg_ig_maps", 00:06:29.724 "iscsi_target_node_add_pg_ig_maps", 00:06:29.724 "iscsi_create_target_node", 00:06:29.724 "iscsi_get_target_nodes", 00:06:29.724 "iscsi_delete_initiator_group", 00:06:29.724 "iscsi_initiator_group_remove_initiators", 00:06:29.724 "iscsi_initiator_group_add_initiators", 00:06:29.724 "iscsi_create_initiator_group", 00:06:29.724 "iscsi_get_initiator_groups", 00:06:29.724 "nvmf_set_crdt", 00:06:29.724 "nvmf_set_config", 00:06:29.724 "nvmf_set_max_subsystems", 00:06:29.724 "nvmf_stop_mdns_prr", 00:06:29.724 "nvmf_publish_mdns_prr", 00:06:29.724 "nvmf_subsystem_get_listeners", 00:06:29.724 "nvmf_subsystem_get_qpairs", 00:06:29.724 "nvmf_subsystem_get_controllers", 00:06:29.724 "nvmf_get_stats", 00:06:29.724 
"nvmf_get_transports", 00:06:29.724 "nvmf_create_transport", 00:06:29.724 "nvmf_get_targets", 00:06:29.724 "nvmf_delete_target", 00:06:29.724 "nvmf_create_target", 00:06:29.724 "nvmf_subsystem_allow_any_host", 00:06:29.724 "nvmf_subsystem_remove_host", 00:06:29.724 "nvmf_subsystem_add_host", 00:06:29.724 "nvmf_ns_remove_host", 00:06:29.724 "nvmf_ns_add_host", 00:06:29.724 "nvmf_subsystem_remove_ns", 00:06:29.724 "nvmf_subsystem_add_ns", 00:06:29.724 "nvmf_subsystem_listener_set_ana_state", 00:06:29.724 "nvmf_discovery_get_referrals", 00:06:29.724 "nvmf_discovery_remove_referral", 00:06:29.724 "nvmf_discovery_add_referral", 00:06:29.724 "nvmf_subsystem_remove_listener", 00:06:29.724 "nvmf_subsystem_add_listener", 00:06:29.724 "nvmf_delete_subsystem", 00:06:29.724 "nvmf_create_subsystem", 00:06:29.724 "nvmf_get_subsystems", 00:06:29.724 "env_dpdk_get_mem_stats", 00:06:29.724 "nbd_get_disks", 00:06:29.724 "nbd_stop_disk", 00:06:29.724 "nbd_start_disk", 00:06:29.724 "ublk_recover_disk", 00:06:29.724 "ublk_get_disks", 00:06:29.724 "ublk_stop_disk", 00:06:29.724 "ublk_start_disk", 00:06:29.724 "ublk_destroy_target", 00:06:29.724 "ublk_create_target", 00:06:29.724 "virtio_blk_create_transport", 00:06:29.724 "virtio_blk_get_transports", 00:06:29.724 "vhost_controller_set_coalescing", 00:06:29.724 "vhost_get_controllers", 00:06:29.724 "vhost_delete_controller", 00:06:29.724 "vhost_create_blk_controller", 00:06:29.724 "vhost_scsi_controller_remove_target", 00:06:29.724 "vhost_scsi_controller_add_target", 00:06:29.724 "vhost_start_scsi_controller", 00:06:29.724 "vhost_create_scsi_controller", 00:06:29.724 "thread_set_cpumask", 00:06:29.724 "framework_get_scheduler", 00:06:29.724 "framework_set_scheduler", 00:06:29.724 "framework_get_reactors", 00:06:29.724 "thread_get_io_channels", 00:06:29.724 "thread_get_pollers", 00:06:29.724 "thread_get_stats", 00:06:29.724 "framework_monitor_context_switch", 00:06:29.724 "spdk_kill_instance", 00:06:29.724 "log_enable_timestamps", 00:06:29.724 "log_get_flags", 00:06:29.724 "log_clear_flag", 00:06:29.724 "log_set_flag", 00:06:29.724 "log_get_level", 00:06:29.724 "log_set_level", 00:06:29.724 "log_get_print_level", 00:06:29.724 "log_set_print_level", 00:06:29.724 "framework_enable_cpumask_locks", 00:06:29.724 "framework_disable_cpumask_locks", 00:06:29.725 "framework_wait_init", 00:06:29.725 "framework_start_init", 00:06:29.725 "scsi_get_devices", 00:06:29.725 "bdev_get_histogram", 00:06:29.725 "bdev_enable_histogram", 00:06:29.725 "bdev_set_qos_limit", 00:06:29.725 "bdev_set_qd_sampling_period", 00:06:29.725 "bdev_get_bdevs", 00:06:29.725 "bdev_reset_iostat", 00:06:29.725 "bdev_get_iostat", 00:06:29.725 "bdev_examine", 00:06:29.725 "bdev_wait_for_examine", 00:06:29.725 "bdev_set_options", 00:06:29.725 "notify_get_notifications", 00:06:29.725 "notify_get_types", 00:06:29.725 "accel_get_stats", 00:06:29.725 "accel_set_options", 00:06:29.725 "accel_set_driver", 00:06:29.725 "accel_crypto_key_destroy", 00:06:29.725 "accel_crypto_keys_get", 00:06:29.725 "accel_crypto_key_create", 00:06:29.725 "accel_assign_opc", 00:06:29.725 "accel_get_module_info", 00:06:29.725 "accel_get_opc_assignments", 00:06:29.725 "vmd_rescan", 00:06:29.725 "vmd_remove_device", 00:06:29.725 "vmd_enable", 00:06:29.725 "sock_get_default_impl", 00:06:29.725 "sock_set_default_impl", 00:06:29.725 "sock_impl_set_options", 00:06:29.725 "sock_impl_get_options", 00:06:29.725 "iobuf_get_stats", 00:06:29.725 "iobuf_set_options", 00:06:29.725 "keyring_get_keys", 00:06:29.725 "framework_get_pci_devices", 
00:06:29.725 "framework_get_config", 00:06:29.725 "framework_get_subsystems", 00:06:29.725 "vfu_tgt_set_base_path", 00:06:29.725 "trace_get_info", 00:06:29.725 "trace_get_tpoint_group_mask", 00:06:29.725 "trace_disable_tpoint_group", 00:06:29.725 "trace_enable_tpoint_group", 00:06:29.725 "trace_clear_tpoint_mask", 00:06:29.725 "trace_set_tpoint_mask", 00:06:29.725 "spdk_get_version", 00:06:29.725 "rpc_get_methods" 00:06:29.725 ] 00:06:29.725 22:36:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:29.725 22:36:22 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.725 22:36:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.982 22:36:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:29.982 22:36:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3413824 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3413824 ']' 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3413824 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3413824 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3413824' 00:06:29.982 killing process with pid 3413824 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3413824 00:06:29.982 22:36:22 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3413824 00:06:30.240 00:06:30.240 real 0m1.187s 00:06:30.240 user 0m2.110s 00:06:30.240 sys 0m0.432s 00:06:30.240 22:36:22 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.240 22:36:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.240 ************************************ 00:06:30.240 END TEST spdkcli_tcp 00:06:30.240 ************************************ 00:06:30.240 22:36:22 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:30.240 22:36:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.240 22:36:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.240 22:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:30.240 ************************************ 00:06:30.240 START TEST dpdk_mem_utility 00:06:30.240 ************************************ 00:06:30.240 22:36:22 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:30.499 * Looking for test storage... 
00:06:30.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:30.499 22:36:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:30.499 22:36:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3414026 00:06:30.499 22:36:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:30.499 22:36:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3414026 00:06:30.499 22:36:22 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3414026 ']' 00:06:30.499 22:36:22 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.499 22:36:22 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.499 22:36:22 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.499 22:36:22 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.499 22:36:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.499 [2024-07-26 22:36:22.810262] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:30.499 [2024-07-26 22:36:22.810360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414026 ] 00:06:30.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.499 [2024-07-26 22:36:22.866326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.499 [2024-07-26 22:36:22.950200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.757 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.757 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:30.757 22:36:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:30.757 22:36:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:30.757 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.757 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.757 { 00:06:30.757 "filename": "/tmp/spdk_mem_dump.txt" 00:06:30.757 } 00:06:30.757 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.757 22:36:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:31.015 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:31.015 1 heaps totaling size 814.000000 MiB 00:06:31.015 size: 814.000000 MiB heap id: 0 00:06:31.015 end heaps---------- 00:06:31.015 8 mempools totaling size 598.116089 MiB 00:06:31.015 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:31.015 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:31.015 size: 84.521057 MiB name: bdev_io_3414026 00:06:31.015 size: 51.011292 MiB name: evtpool_3414026 00:06:31.015 size: 50.003479 MiB name: 
msgpool_3414026 00:06:31.015 size: 21.763794 MiB name: PDU_Pool 00:06:31.015 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:31.015 size: 0.026123 MiB name: Session_Pool 00:06:31.015 end mempools------- 00:06:31.015 6 memzones totaling size 4.142822 MiB 00:06:31.015 size: 1.000366 MiB name: RG_ring_0_3414026 00:06:31.016 size: 1.000366 MiB name: RG_ring_1_3414026 00:06:31.016 size: 1.000366 MiB name: RG_ring_4_3414026 00:06:31.016 size: 1.000366 MiB name: RG_ring_5_3414026 00:06:31.016 size: 0.125366 MiB name: RG_ring_2_3414026 00:06:31.016 size: 0.015991 MiB name: RG_ring_3_3414026 00:06:31.016 end memzones------- 00:06:31.016 22:36:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:31.016 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:31.016 list of free elements. size: 12.519348 MiB 00:06:31.016 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:31.016 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:31.016 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:31.016 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:31.016 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:31.016 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:31.016 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:31.016 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:31.016 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:31.016 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:31.016 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:31.016 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:31.016 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:31.016 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:31.016 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:31.016 list of standard malloc elements. 
size: 199.218079 MiB 00:06:31.016 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:31.016 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:31.016 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:31.016 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:31.016 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:31.016 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:31.016 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:31.016 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:31.016 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:31.016 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:31.016 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:31.016 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:31.016 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:31.016 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:31.016 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:31.016 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:31.016 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:31.016 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:31.016 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:31.016 list of memzone associated elements. 
size: 602.262573 MiB 00:06:31.016 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:31.016 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:31.016 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:31.016 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:31.016 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:31.016 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3414026_0 00:06:31.016 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:31.016 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3414026_0 00:06:31.016 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:31.016 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3414026_0 00:06:31.016 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:31.016 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:31.016 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:31.016 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:31.016 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:31.016 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3414026 00:06:31.016 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:31.016 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3414026 00:06:31.016 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:31.016 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3414026 00:06:31.016 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:31.016 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:31.016 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:31.016 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:31.016 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:31.016 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:31.016 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:31.016 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:31.016 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:31.016 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3414026 00:06:31.016 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:31.016 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3414026 00:06:31.016 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:31.016 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3414026 00:06:31.016 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:31.016 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3414026 00:06:31.016 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:31.016 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3414026 00:06:31.016 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:31.016 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:31.016 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:31.016 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:31.016 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:31.016 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:31.016 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:31.016 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3414026 00:06:31.016 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:31.016 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:31.016 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:31.016 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:31.016 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:31.016 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3414026 00:06:31.016 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:31.016 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:31.016 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:31.016 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3414026 00:06:31.016 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:31.016 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3414026 00:06:31.016 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:31.016 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:31.016 22:36:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:31.016 22:36:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3414026 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3414026 ']' 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3414026 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3414026 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3414026' 00:06:31.016 killing process with pid 3414026 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3414026 00:06:31.016 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3414026 00:06:31.274 00:06:31.274 real 0m1.036s 00:06:31.275 user 0m1.008s 00:06:31.275 sys 0m0.405s 00:06:31.275 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.275 22:36:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:31.275 ************************************ 00:06:31.275 END TEST dpdk_mem_utility 00:06:31.275 ************************************ 00:06:31.275 22:36:23 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:31.275 22:36:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.275 22:36:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.275 22:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:31.533 ************************************ 00:06:31.533 START TEST event 00:06:31.533 ************************************ 00:06:31.533 22:36:23 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:31.533 * Looking for test storage... 
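[Editor's note] The dpdk_mem_utility test above drives two pieces: the env_dpdk_get_mem_stats RPC, which makes the target write a DPDK memory dump (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py, which renders that dump first as the heap/mempool/memzone summary and then, with -m 0 as in this run, as the element-level listing for heap id 0 seen above. A sketch of the sequence against a running target on the default RPC socket:

# Ask the target to dump its DPDK memory state; the reply names the output file
scripts/rpc.py env_dpdk_get_mem_stats

# Summarize heaps, mempools and memzones from the dump
scripts/dpdk_mem_info.py

# Element-level view of heap id 0, matching the free/malloc/memzone lists above
scripts/dpdk_mem_info.py -m 0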
00:06:31.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:31.533 22:36:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:31.533 22:36:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.533 22:36:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:31.533 22:36:23 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:31.533 22:36:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.533 22:36:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.533 ************************************ 00:06:31.533 START TEST event_perf 00:06:31.533 ************************************ 00:06:31.533 22:36:23 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:31.533 Running I/O for 1 seconds...[2024-07-26 22:36:23.882276] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:31.533 [2024-07-26 22:36:23.882338] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414215 ] 00:06:31.533 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.533 [2024-07-26 22:36:23.944579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.791 [2024-07-26 22:36:24.037229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.791 [2024-07-26 22:36:24.037283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.791 [2024-07-26 22:36:24.037368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.791 [2024-07-26 22:36:24.037371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.724 Running I/O for 1 seconds... 00:06:32.724 lcore 0: 233433 00:06:32.724 lcore 1: 233432 00:06:32.724 lcore 2: 233433 00:06:32.724 lcore 3: 233433 00:06:32.724 done. 00:06:32.724 00:06:32.724 real 0m1.252s 00:06:32.724 user 0m4.157s 00:06:32.724 sys 0m0.089s 00:06:32.724 22:36:25 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.724 22:36:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.724 ************************************ 00:06:32.724 END TEST event_perf 00:06:32.724 ************************************ 00:06:32.724 22:36:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:32.724 22:36:25 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:32.724 22:36:25 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.724 22:36:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.724 ************************************ 00:06:32.724 START TEST event_reactor 00:06:32.724 ************************************ 00:06:32.724 22:36:25 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:32.724 [2024-07-26 22:36:25.186482] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:32.725 [2024-07-26 22:36:25.186550] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414370 ] 00:06:32.725 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.983 [2024-07-26 22:36:25.251989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.983 [2024-07-26 22:36:25.343562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.356 test_start 00:06:34.356 oneshot 00:06:34.356 tick 100 00:06:34.356 tick 100 00:06:34.356 tick 250 00:06:34.356 tick 100 00:06:34.356 tick 100 00:06:34.357 tick 100 00:06:34.357 tick 250 00:06:34.357 tick 500 00:06:34.357 tick 100 00:06:34.357 tick 100 00:06:34.357 tick 250 00:06:34.357 tick 100 00:06:34.357 tick 100 00:06:34.357 test_end 00:06:34.357 00:06:34.357 real 0m1.253s 00:06:34.357 user 0m1.163s 00:06:34.357 sys 0m0.085s 00:06:34.357 22:36:26 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.357 22:36:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:34.357 ************************************ 00:06:34.357 END TEST event_reactor 00:06:34.357 ************************************ 00:06:34.357 22:36:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.357 22:36:26 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:34.357 22:36:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.357 22:36:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.357 ************************************ 00:06:34.357 START TEST event_reactor_perf 00:06:34.357 ************************************ 00:06:34.357 22:36:26 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.357 [2024-07-26 22:36:26.487972] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:34.357 [2024-07-26 22:36:26.488034] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414530 ] 00:06:34.357 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.357 [2024-07-26 22:36:26.551758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.357 [2024-07-26 22:36:26.644941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.292 test_start 00:06:35.293 test_end 00:06:35.293 Performance: 355782 events per second 00:06:35.293 00:06:35.293 real 0m1.251s 00:06:35.293 user 0m1.159s 00:06:35.293 sys 0m0.087s 00:06:35.293 22:36:27 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.293 22:36:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.293 ************************************ 00:06:35.293 END TEST event_reactor_perf 00:06:35.293 ************************************ 00:06:35.293 22:36:27 event -- event/event.sh@49 -- # uname -s 00:06:35.293 22:36:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:35.293 22:36:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:35.293 22:36:27 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.293 22:36:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.293 22:36:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.293 ************************************ 00:06:35.293 START TEST event_scheduler 00:06:35.293 ************************************ 00:06:35.293 22:36:27 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:35.557 * Looking for test storage... 00:06:35.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:35.557 22:36:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:35.557 22:36:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3414708 00:06:35.557 22:36:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:35.557 22:36:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.557 22:36:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3414708 00:06:35.557 22:36:27 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3414708 ']' 00:06:35.557 22:36:27 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.557 22:36:27 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.557 22:36:27 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
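[Editor's note] The three event-framework micro-tests logged above (event_perf, event_reactor, event_reactor_perf) are thin wrappers around standalone binaries; each prints its own per-lcore counts, tick trace, or events-per-second figure, and the harness only checks that they complete. Reproducing them by hand looks roughly like this, with paths relative to the SPDK tree and arguments taken from this run:

# Dispatch events across 4 reactors (mask 0xF) for 1 second; prints per-lcore event counts
test/event/event_perf/event_perf -m 0xF -t 1

# Single-reactor timer test; prints the oneshot/tick trace seen above
test/event/reactor/reactor -t 1

# Single-reactor event throughput; prints "Performance: N events per second"
test/event/reactor_perf/reactor_perf -t 1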
00:06:35.557 22:36:27 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.557 22:36:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.557 [2024-07-26 22:36:27.869138] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:35.557 [2024-07-26 22:36:27.869209] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414708 ] 00:06:35.557 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.557 [2024-07-26 22:36:27.925646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.557 [2024-07-26 22:36:28.017511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.557 [2024-07-26 22:36:28.017586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.557 [2024-07-26 22:36:28.017644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.557 [2024-07-26 22:36:28.017647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:35.857 22:36:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 POWER: Env isn't set yet! 00:06:35.857 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:35.857 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:35.857 POWER: Cannot get available frequencies of lcore 0 00:06:35.857 POWER: Attempting to initialise PSTAT power management... 
00:06:35.857 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:35.857 POWER: Initialized successfully for lcore 0 power management 00:06:35.857 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:35.857 POWER: Initialized successfully for lcore 1 power management 00:06:35.857 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:35.857 POWER: Initialized successfully for lcore 2 power management 00:06:35.857 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:35.857 POWER: Initialized successfully for lcore 3 power management 00:06:35.857 [2024-07-26 22:36:28.136258] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:35.857 [2024-07-26 22:36:28.136279] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:35.857 [2024-07-26 22:36:28.136290] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.857 22:36:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 [2024-07-26 22:36:28.234990] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.857 22:36:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.857 22:36:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 ************************************ 00:06:35.857 START TEST scheduler_create_thread 00:06:35.857 ************************************ 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 2 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 3 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 4 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 5 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 6 00:06:35.857 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.858 7 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.858 8 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.858 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.121 9 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.121 10 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.121 22:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.494 22:36:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.494 22:36:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:37.494 22:36:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:37.494 22:36:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.494 22:36:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.424 22:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.424 00:06:38.424 real 0m2.618s 00:06:38.424 user 0m0.010s 00:06:38.424 sys 0m0.004s 00:06:38.424 22:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.424 22:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.424 ************************************ 00:06:38.424 END TEST scheduler_create_thread 00:06:38.424 ************************************ 00:06:38.424 22:36:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:38.424 22:36:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3414708 00:06:38.424 22:36:30 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3414708 ']' 00:06:38.424 22:36:30 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3414708 00:06:38.424 22:36:30 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
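[Editor's note] The scheduler_create_thread test above exercises the dynamic scheduler purely through the scheduler test app's RPC plugin: pinned and idle threads are created with cpumasks and active percentages, one thread is set half-active, and one is created and deleted again. A condensed sketch of that RPC sequence, assuming the scheduler app is on the default socket and the scheduler_plugin module is importable by rpc.py (the thread ids 11 and 12 are the ones this particular run returned):

# Pinned threads, one per core, 100% active (repeat for masks 0x2, 0x4, 0x8)
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

# An unpinned thread at 30% activity, then one created idle and raised to 50%
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50

# Create a thread and delete it again (id 12 in the run above)
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12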
00:06:38.425 22:36:30 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.425 22:36:30 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3414708 00:06:38.681 22:36:30 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:38.681 22:36:30 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:38.681 22:36:30 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3414708' 00:06:38.681 killing process with pid 3414708 00:06:38.681 22:36:30 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3414708 00:06:38.681 22:36:30 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3414708 00:06:38.938 [2024-07-26 22:36:31.362443] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:39.196 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:39.196 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:39.196 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:39.196 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:39.196 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:39.196 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:39.196 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:39.196 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:39.196 00:06:39.196 real 0m3.808s 00:06:39.196 user 0m5.846s 00:06:39.196 sys 0m0.332s 00:06:39.196 22:36:31 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.196 22:36:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.196 ************************************ 00:06:39.196 END TEST event_scheduler 00:06:39.196 ************************************ 00:06:39.196 22:36:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:39.196 22:36:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:39.196 22:36:31 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.196 22:36:31 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.196 22:36:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.196 ************************************ 00:06:39.196 START TEST app_repeat 00:06:39.196 ************************************ 00:06:39.196 22:36:31 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3415289 00:06:39.196 22:36:31 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3415289' 00:06:39.196 Process app_repeat pid: 3415289 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:39.196 spdk_app_start Round 0 00:06:39.196 22:36:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3415289 /var/tmp/spdk-nbd.sock 00:06:39.196 22:36:31 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3415289 ']' 00:06:39.196 22:36:31 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.196 22:36:31 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.196 22:36:31 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.196 22:36:31 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.196 22:36:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.196 [2024-07-26 22:36:31.656244] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:39.197 [2024-07-26 22:36:31.656304] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415289 ] 00:06:39.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.454 [2024-07-26 22:36:31.715403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.454 [2024-07-26 22:36:31.804991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.454 [2024-07-26 22:36:31.804994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.454 22:36:31 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.454 22:36:31 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:39.454 22:36:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.711 Malloc0 00:06:39.711 22:36:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.969 Malloc1 00:06:39.969 22:36:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.969 22:36:32 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.969 22:36:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.226 /dev/nbd0 00:06:40.226 22:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.226 22:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.226 1+0 records in 00:06:40.226 1+0 records out 00:06:40.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150367 s, 27.2 MB/s 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:40.226 22:36:32 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:40.226 22:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.226 22:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.226 22:36:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.483 /dev/nbd1 00:06:40.483 22:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.483 22:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.483 22:36:32 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:40.483 22:36:32 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:40.483 22:36:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:40.483 22:36:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:40.483 22:36:32 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:40.740 22:36:32 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:40.740 22:36:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:40.740 22:36:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:40.740 22:36:32 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.740 1+0 records in 00:06:40.740 1+0 records out 00:06:40.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183373 s, 22.3 MB/s 00:06:40.740 22:36:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.740 22:36:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:40.740 22:36:33 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.740 22:36:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:40.740 22:36:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:40.740 22:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.741 22:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.741 22:36:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.741 22:36:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.741 22:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.998 { 00:06:40.998 "nbd_device": "/dev/nbd0", 00:06:40.998 "bdev_name": "Malloc0" 00:06:40.998 }, 00:06:40.998 { 00:06:40.998 "nbd_device": "/dev/nbd1", 00:06:40.998 "bdev_name": "Malloc1" 00:06:40.998 } 00:06:40.998 ]' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.998 { 00:06:40.998 "nbd_device": "/dev/nbd0", 00:06:40.998 "bdev_name": "Malloc0" 00:06:40.998 }, 00:06:40.998 { 00:06:40.998 "nbd_device": "/dev/nbd1", 00:06:40.998 "bdev_name": "Malloc1" 00:06:40.998 } 00:06:40.998 ]' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.998 /dev/nbd1' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.998 /dev/nbd1' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.998 22:36:33 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.998 256+0 records in 00:06:40.998 256+0 records out 00:06:40.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050553 s, 207 MB/s 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.998 256+0 records in 00:06:40.998 256+0 records out 00:06:40.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206054 s, 50.9 MB/s 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.998 256+0 records in 00:06:40.998 256+0 records out 00:06:40.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247282 s, 42.4 MB/s 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.998 22:36:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.256 22:36:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.513 22:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.771 22:36:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.771 22:36:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.028 22:36:34 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:42.287 [2024-07-26 22:36:34.686944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.287 [2024-07-26 22:36:34.780015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.287 [2024-07-26 22:36:34.780020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.545 [2024-07-26 22:36:34.834769] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.545 [2024-07-26 22:36:34.834831] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.072 22:36:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.072 22:36:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:45.072 spdk_app_start Round 1 00:06:45.072 22:36:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3415289 /var/tmp/spdk-nbd.sock 00:06:45.072 22:36:37 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3415289 ']' 00:06:45.072 22:36:37 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.072 22:36:37 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.072 22:36:37 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.072 22:36:37 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.072 22:36:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.329 22:36:37 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.329 22:36:37 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:45.329 22:36:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.588 Malloc0 00:06:45.588 22:36:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.847 Malloc1 00:06:45.847 22:36:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
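Each app_repeat round traced here has the same shape: the round is announced, two 64 MB malloc bdevs (4096-byte blocks) are created over the test's dedicated RPC socket, and both are exported as NBD devices before the data-verify pass. A consolidated sketch of the calls visible in the surrounding trace; the rpc.py invocations are verbatim from the log, while the $RPC shorthand is an assumption:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096        # creates Malloc0
    $RPC bdev_malloc_create 64 4096        # creates Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0  # expose each bdev as a kernel block device
    $RPC nbd_start_disk Malloc1 /dev/nbd1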
00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.847 22:36:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.105 /dev/nbd0 00:06:46.105 22:36:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.105 22:36:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.105 1+0 records in 00:06:46.105 1+0 records out 00:06:46.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153277 s, 26.7 MB/s 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:46.105 22:36:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:46.105 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.105 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.105 22:36:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.363 /dev/nbd1 00:06:46.363 22:36:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.363 22:36:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
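The waitfornbd helper traced above for nbd0, and again for nbd1 just below, guards against the kernel exposing an NBD node before it is actually usable: it polls /proc/partitions for the device name, then performs a single direct-I/O read and checks that a non-empty block came back. A minimal sketch inferred from the xtrace; the retry bound of 20 and the grep/dd/stat commands match the log, while the sleep between attempts and the temp-file path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/tmp/nbdtest                      # illustrative; the log uses the workspace path
        for ((i = 1; i <= 20; i++)); do             # wait for the partition entry to appear
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                               # assumed back-off, not shown in the trace
        done
        for ((i = 1; i <= 20; i++)); do             # wait until a direct read succeeds
            dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s $tmp)
            rm -f $tmp
            [ "$size" != 0 ] && return 0            # got a real block back
        done
        return 1
    }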
00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.363 1+0 records in 00:06:46.363 1+0 records out 00:06:46.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022984 s, 17.8 MB/s 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:46.363 22:36:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:46.363 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.363 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.363 22:36:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.363 22:36:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.363 22:36:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.622 { 00:06:46.622 "nbd_device": "/dev/nbd0", 00:06:46.622 "bdev_name": "Malloc0" 00:06:46.622 }, 00:06:46.622 { 00:06:46.622 "nbd_device": "/dev/nbd1", 00:06:46.622 "bdev_name": "Malloc1" 00:06:46.622 } 00:06:46.622 ]' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.622 { 00:06:46.622 "nbd_device": "/dev/nbd0", 00:06:46.622 "bdev_name": "Malloc0" 00:06:46.622 }, 00:06:46.622 { 00:06:46.622 "nbd_device": "/dev/nbd1", 00:06:46.622 "bdev_name": "Malloc1" 00:06:46.622 } 00:06:46.622 ]' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.622 /dev/nbd1' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.622 /dev/nbd1' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.622 256+0 records in 00:06:46.622 256+0 records out 00:06:46.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496789 s, 211 MB/s 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.622 256+0 records in 00:06:46.622 256+0 records out 00:06:46.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240707 s, 43.6 MB/s 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.622 256+0 records in 00:06:46.622 256+0 records out 00:06:46.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220162 s, 47.6 MB/s 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.622 22:36:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.188 
22:36:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.188 22:36:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.446 22:36:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.446 22:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.446 22:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.704 22:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.704 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.704 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.704 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.704 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.704 22:36:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.704 22:36:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.704 22:36:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.704 22:36:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.704 22:36:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.963 22:36:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.963 [2024-07-26 22:36:40.465408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.221 [2024-07-26 22:36:40.558084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.221 [2024-07-26 22:36:40.558090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.221 [2024-07-26 22:36:40.620857] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
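The write/verify pass that closed out this round is the core of the test: nbd_dd_data_verify seeds a temp file with 1 MiB from /dev/urandom, dd's it onto every NBD device with oflag=direct, and on the verify call cmp-checks the first 1M of each device against the file byte for byte before deleting it. A sketch reconstructed from the traced commands; the real helper lives in bdev/nbd_common.sh and this body is an assumption:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest             # shortened; the log uses the workspace path
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of=$tmp_file bs=4096 count=256          # 1 MiB seed
            for i in "${nbd_list[@]}"; do
                dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct   # write pass
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M $tmp_file $i           # byte-for-byte check of the first 1M
            done
            rm $tmp_file
        fi
    }
    nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
    nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify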
00:06:48.221 [2024-07-26 22:36:40.620937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.746 22:36:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.746 22:36:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.746 spdk_app_start Round 2 00:06:50.746 22:36:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3415289 /var/tmp/spdk-nbd.sock 00:06:50.746 22:36:43 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3415289 ']' 00:06:50.746 22:36:43 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.746 22:36:43 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.746 22:36:43 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.746 22:36:43 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.746 22:36:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.003 22:36:43 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.003 22:36:43 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:51.003 22:36:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.261 Malloc0 00:06:51.261 22:36:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.519 Malloc1 00:06:51.519 22:36:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.519 22:36:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.782 /dev/nbd0 00:06:51.782 
22:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.782 22:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.782 22:36:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:51.782 22:36:44 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:51.782 22:36:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:51.782 22:36:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:51.782 22:36:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:51.782 22:36:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:51.782 22:36:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:51.782 22:36:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:51.782 22:36:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.082 1+0 records in 00:06:52.082 1+0 records out 00:06:52.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145769 s, 28.1 MB/s 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:52.082 22:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.082 22:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.082 22:36:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.082 /dev/nbd1 00:06:52.082 22:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.082 22:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.082 1+0 records in 00:06:52.082 1+0 records out 00:06:52.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198489 s, 20.6 MB/s 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:52.082 22:36:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.341 22:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:52.341 22:36:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:52.341 22:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.341 22:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.341 22:36:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.341 22:36:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.341 22:36:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.341 22:36:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.341 { 00:06:52.341 "nbd_device": "/dev/nbd0", 00:06:52.341 "bdev_name": "Malloc0" 00:06:52.341 }, 00:06:52.341 { 00:06:52.341 "nbd_device": "/dev/nbd1", 00:06:52.341 "bdev_name": "Malloc1" 00:06:52.341 } 00:06:52.341 ]' 00:06:52.341 22:36:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.341 { 00:06:52.341 "nbd_device": "/dev/nbd0", 00:06:52.341 "bdev_name": "Malloc0" 00:06:52.341 }, 00:06:52.341 { 00:06:52.341 "nbd_device": "/dev/nbd1", 00:06:52.341 "bdev_name": "Malloc1" 00:06:52.341 } 00:06:52.341 ]' 00:06:52.341 22:36:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.599 /dev/nbd1' 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.599 /dev/nbd1' 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.599 256+0 records in 00:06:52.599 256+0 records out 00:06:52.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394665 s, 266 MB/s 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.599 256+0 records in 00:06:52.599 256+0 records out 00:06:52.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269434 s, 38.9 MB/s 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.599 256+0 records in 00:06:52.599 256+0 records out 00:06:52.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272836 s, 38.4 MB/s 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.599 22:36:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
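Teardown mirrors setup: each device gets an nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the name disappears, and nbd_get_disks must then report an empty list before the app instance is killed. A sketch assembled from the traced commands, with $RPC as in the setup sketch above; the loop body is an assumption:

    for dev in /dev/nbd0 /dev/nbd1; do
        $RPC nbd_stop_disk $dev
        name=$(basename $dev)
        for ((i = 1; i <= 20; i++)); do                      # waitfornbd_exit
            grep -q -w "$name" /proc/partitions || break     # entry gone: done waiting
            sleep 0.1                                        # assumed back-off
        done
    done
    count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && exit 1                             # anything left behind is a failure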
00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.857 22:36:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.115 22:36:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.372 22:36:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.372 22:36:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.630 22:36:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.888 [2024-07-26 22:36:46.244554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.888 [2024-07-26 22:36:46.333998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.888 [2024-07-26 22:36:46.334004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.147 [2024-07-26 22:36:46.396681] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.147 [2024-07-26 22:36:46.396744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
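The outer driver visible at event/event.sh@23-35 repeats that whole cycle three times: announce the round, wait for the app to listen on its socket, run the malloc/NBD/verify pass, then request shutdown and pause so the reactors can restart. A hedged sketch of the loop; $app_pid stands in for the pid the log waits on (3415289):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $app_pid /var/tmp/spdk-nbd.sock   # block until the RPC socket is up
        # ... malloc bdev + NBD export + dd/cmp verify, as sketched above ...
        $RPC spdk_kill_instance SIGTERM                 # graceful shutdown request
        sleep 3                                         # give the app time to cycle into the next round
    done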
00:06:56.674 22:36:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3415289 /var/tmp/spdk-nbd.sock 00:06:56.674 22:36:49 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3415289 ']' 00:06:56.674 22:36:49 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.674 22:36:49 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.674 22:36:49 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.674 22:36:49 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.674 22:36:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:56.931 22:36:49 event.app_repeat -- event/event.sh@39 -- # killprocess 3415289 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3415289 ']' 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3415289 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3415289 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3415289' 00:06:56.931 killing process with pid 3415289 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3415289 00:06:56.931 22:36:49 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3415289 00:06:57.189 spdk_app_start is called in Round 0. 00:06:57.189 Shutdown signal received, stop current app iteration 00:06:57.189 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:57.189 spdk_app_start is called in Round 1. 00:06:57.189 Shutdown signal received, stop current app iteration 00:06:57.189 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:57.189 spdk_app_start is called in Round 2. 00:06:57.189 Shutdown signal received, stop current app iteration 00:06:57.189 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:57.189 spdk_app_start is called in Round 3. 
00:06:57.189 Shutdown signal received, stop current app iteration 00:06:57.189 22:36:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:57.189 22:36:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:57.189 00:06:57.189 real 0m17.854s 00:06:57.189 user 0m38.813s 00:06:57.189 sys 0m3.217s 00:06:57.189 22:36:49 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.189 22:36:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.189 ************************************ 00:06:57.189 END TEST app_repeat 00:06:57.189 ************************************ 00:06:57.189 22:36:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:57.189 22:36:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:57.189 22:36:49 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.189 22:36:49 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.189 22:36:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.189 ************************************ 00:06:57.189 START TEST cpu_locks 00:06:57.189 ************************************ 00:06:57.189 22:36:49 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:57.189 * Looking for test storage... 00:06:57.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:57.189 22:36:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:57.189 22:36:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:57.189 22:36:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:57.189 22:36:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:57.189 22:36:49 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.189 22:36:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.189 22:36:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.189 ************************************ 00:06:57.189 START TEST default_locks 00:06:57.189 ************************************ 00:06:57.189 22:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:57.189 22:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3417638 00:06:57.189 22:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.189 22:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3417638 00:06:57.189 22:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3417638 ']' 00:06:57.189 22:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.189 22:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.189 22:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
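killprocess, used to end app_repeat above and reused throughout the cpu_locks tests below, refuses to kill blindly: it confirms the pid is still alive with kill -0, on Linux checks that the process's comm name (reactor_0 in these traces) is not sudo, then sends SIGTERM and waits for the pid to be reaped. A sketch inferred from the xtrace at autotest_common.sh@946-970; the function body, in particular the sudo handling, is an assumption:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                            # already gone?
        if [ "$(uname)" = Linux ]; then
            local process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1            # assumed: never SIGTERM the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap and propagate the exit status
    }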
00:06:57.189 22:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.189 22:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.189 [2024-07-26 22:36:49.667527] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:57.189 [2024-07-26 22:36:49.667602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417638 ] 00:06:57.447 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.447 [2024-07-26 22:36:49.724117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.447 [2024-07-26 22:36:49.807740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.705 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.705 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:57.705 22:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3417638 00:06:57.705 22:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3417638 00:06:57.705 22:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.270 lslocks: write error 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3417638 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3417638 ']' 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3417638 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3417638 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3417638' 00:06:58.270 killing process with pid 3417638 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3417638 00:06:58.270 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3417638 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3417638 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3417638 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 3417638 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3417638 ']' 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3417638) - No such process 00:06:58.530 ERROR: process (pid: 3417638) is no longer running 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.530 00:06:58.530 real 0m1.329s 00:06:58.530 user 0m1.271s 00:06:58.530 sys 0m0.551s 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.530 22:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.530 ************************************ 00:06:58.530 END TEST default_locks 00:06:58.530 ************************************ 00:06:58.530 22:36:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:58.530 22:36:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:58.530 22:36:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.530 22:36:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.530 ************************************ 00:06:58.530 START TEST default_locks_via_rpc 00:06:58.530 ************************************ 00:06:58.530 22:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:58.530 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3417800 00:06:58.530 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.530 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3417800 00:06:58.530 22:36:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3417800 ']' 00:06:58.530 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.530 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:58.530 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.530 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:58.530 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.789 [2024-07-26 22:36:51.051202] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:58.789 [2024-07-26 22:36:51.051287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417800 ] 00:06:58.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.789 [2024-07-26 22:36:51.108964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.789 [2024-07-26 22:36:51.200259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3417800 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3417800 00:06:59.048 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3417800 00:06:59.306 22:36:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3417800 ']' 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3417800 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3417800 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3417800' 00:06:59.306 killing process with pid 3417800 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3417800 00:06:59.306 22:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3417800 00:06:59.872 00:06:59.872 real 0m1.197s 00:06:59.872 user 0m1.137s 00:06:59.872 sys 0m0.514s 00:06:59.872 22:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.872 22:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.872 ************************************ 00:06:59.872 END TEST default_locks_via_rpc 00:06:59.872 ************************************ 00:06:59.872 22:36:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:59.872 22:36:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.872 22:36:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.872 22:36:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.872 ************************************ 00:06:59.872 START TEST non_locking_app_on_locked_coremask 00:06:59.872 ************************************ 00:06:59.872 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:59.872 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3417962 00:06:59.872 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.872 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3417962 /var/tmp/spdk.sock 00:06:59.872 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3417962 ']' 00:06:59.872 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.872 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:59.872 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
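The waitforlisten step traced above blocks until the just-launched spdk_tgt answers on its UNIX-domain RPC socket. A minimal sketch of that polling pattern, assuming SPDK's stock rpc.py is on PATH (the real helper in autotest_common.sh also honors the max_retries=100 and the "Waiting for process..." echo visible in the trace):

    pid=3417800; sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "target $pid died" >&2; break; }
        # rpc_get_methods succeeds once the RPC server is listening
        rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done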
00:06:59.873 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:59.873 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.873 [2024-07-26 22:36:52.294930] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:59.873 [2024-07-26 22:36:52.295025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417962 ] 00:06:59.873 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.873 [2024-07-26 22:36:52.351653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.131 [2024-07-26 22:36:52.441533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.389 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:00.389 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:00.389 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3417988 00:07:00.389 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:00.389 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3417988 /var/tmp/spdk2.sock 00:07:00.389 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3417988 ']' 00:07:00.389 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.389 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.389 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.390 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.390 22:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.390 [2024-07-26 22:36:52.754145] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:00.390 [2024-07-26 22:36:52.754225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417988 ] 00:07:00.390 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.390 [2024-07-26 22:36:52.848200] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
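The "CPU core locks deactivated" notice just above is the crux of this test: the second target opts out of the per-core lock files, so two instances can share core 0. Stripped to their essentials, the two launches traced here differ only in two flags (binary path shortened):

    spdk_tgt -m 0x1                                                  # pid 3417962, takes /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock   # pid 3417988, same core, no lock, own RPC socket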
00:07:00.390 [2024-07-26 22:36:52.848230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.648 [2024-07-26 22:36:53.026714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.214 22:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.214 22:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:01.214 22:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3417962 00:07:01.214 22:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3417962 00:07:01.214 22:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.778 lslocks: write error 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3417962 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3417962 ']' 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3417962 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3417962 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3417962' 00:07:01.778 killing process with pid 3417962 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3417962 00:07:01.778 22:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3417962 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3417988 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3417988 ']' 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3417988 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3417988 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3417988' 00:07:02.710 
killing process with pid 3417988 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3417988 00:07:02.710 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3417988 00:07:02.968 00:07:02.968 real 0m3.222s 00:07:02.968 user 0m3.368s 00:07:02.968 sys 0m1.060s 00:07:02.968 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.968 22:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.968 ************************************ 00:07:02.968 END TEST non_locking_app_on_locked_coremask 00:07:02.968 ************************************ 00:07:03.227 22:36:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:03.227 22:36:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.227 22:36:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.227 22:36:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.227 ************************************ 00:07:03.227 START TEST locking_app_on_unlocked_coremask 00:07:03.227 ************************************ 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3418394 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3418394 /var/tmp/spdk.sock 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418394 ']' 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.227 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.227 [2024-07-26 22:36:55.567176] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:03.227 [2024-07-26 22:36:55.567273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418394 ] 00:07:03.227 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.227 [2024-07-26 22:36:55.624727] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
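The stray "lslocks: write error" lines in this trace are benign: the locks_exist helper pipes lslocks into grep -q, grep exits at its first match, and lslocks then takes an EPIPE on its next write. Only grep's exit status matters to the test. A sketch of the helper as traced at event/cpu_locks.sh@22:

    locks_exist() {
        # "lslocks: write error" is just grep -q closing the pipe early
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }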
00:07:03.227 [2024-07-26 22:36:55.624764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.227 [2024-07-26 22:36:55.713608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3418410 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3418410 /var/tmp/spdk2.sock 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418410 ']' 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.485 22:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.743 [2024-07-26 22:36:56.011946] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
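One detail in the bracketed DPDK EAL parameter lines is worth noting: each target is started with a pid-derived --file-prefix (spdk_pid3418394 above, spdk_pid3418410 just below), which keeps the two instances' hugepage and runtime files apart. That separation, alongside the distinct RPC sockets, is what lets every paired-target case in this suite run two spdk_tgt processes side by side. The prefixes here are copied from this trace purely for illustration:

    ... --file-prefix=spdk_pid3418394 ...   # first target's DPDK files
    ... --file-prefix=spdk_pid3418410 ...   # second target's, fully disjoint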
00:07:03.743 [2024-07-26 22:36:56.012036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418410 ] 00:07:03.743 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.743 [2024-07-26 22:36:56.104165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.001 [2024-07-26 22:36:56.293890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.566 22:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.566 22:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:04.566 22:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3418410 00:07:04.566 22:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3418410 00:07:04.566 22:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.131 lslocks: write error 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3418394 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3418394 ']' 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3418394 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418394 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418394' 00:07:05.131 killing process with pid 3418394 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3418394 00:07:05.131 22:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3418394 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3418410 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3418410 ']' 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3418410 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418410 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
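The killprocess flow being traced here is worth unpacking: before signalling, it resolves the pid's command name with ps (an SPDK app shows up as reactor_0) and refuses to kill anything running as sudo, then sends SIGTERM and reaps the child. A condensed sketch of the traced steps, assuming the target is a direct child of the test shell so wait can reap it:

    killprocess() {
        kill -0 "$1" || return 1                # still alive? (traced at @950)
        name=$(ps --no-headers -o comm= "$1")   # reactor_0 for an SPDK app
        [ "$name" = sudo ] && return 1          # guard traced at @956
        kill "$1" && wait "$1"                  # SIGTERM, then reap (@965/@970)
    }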
00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418410' 00:07:06.064 killing process with pid 3418410 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3418410 00:07:06.064 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3418410 00:07:06.322 00:07:06.322 real 0m3.242s 00:07:06.322 user 0m3.391s 00:07:06.322 sys 0m1.078s 00:07:06.322 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.322 22:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.322 ************************************ 00:07:06.322 END TEST locking_app_on_unlocked_coremask 00:07:06.322 ************************************ 00:07:06.322 22:36:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:06.322 22:36:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.322 22:36:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.322 22:36:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.323 ************************************ 00:07:06.323 START TEST locking_app_on_locked_coremask 00:07:06.323 ************************************ 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3418836 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3418836 /var/tmp/spdk.sock 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418836 ']' 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.323 22:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.581 [2024-07-26 22:36:58.858692] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
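What locks_exist actually observes is a small file per claimed core, /var/tmp/spdk_cpu_lock_NNN, that the target holds a file lock on for its lifetime (taken inside app.c, whose claim_cpu_cores errors appear later in this trace). The same claim/collision behavior can be sketched in plain shell with flock(1) as a stand-in; SPDK takes the lock from C, so this is illustrative only:

    exec 9>/var/tmp/spdk_cpu_lock_000          # one file per core
    flock -n 9 || { echo 'core 0 already claimed' >&2; exit 1; }
    # the lock is released automatically when the process (fd 9) goes away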
00:07:06.581 [2024-07-26 22:36:58.858795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418836 ] 00:07:06.581 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.581 [2024-07-26 22:36:58.915607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.581 [2024-07-26 22:36:59.004858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3418843 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3418843 /var/tmp/spdk2.sock 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3418843 /var/tmp/spdk2.sock 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3418843 /var/tmp/spdk2.sock 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418843 ']' 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.840 22:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.840 [2024-07-26 22:36:59.307950] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
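The NOT wrapper around waitforlisten inverts an expected failure: the second target is pointed at a core whose lock is already held, so it must exit non-zero, and NOT turns that into a pass (the es=1 bookkeeping visible in the trace). Reduced to a sketch of what the traced helper does:

    NOT() {
        # succeed only if the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT waitforlisten 3418843 /var/tmp/spdk2.sock   # pass: the target never listens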
00:07:06.840 [2024-07-26 22:36:59.308041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418843 ] 00:07:06.840 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.105 [2024-07-26 22:36:59.404267] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3418836 has claimed it. 00:07:07.105 [2024-07-26 22:36:59.404317] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3418843) - No such process 00:07:07.711 ERROR: process (pid: 3418843) is no longer running 00:07:07.711 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.711 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:07.711 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:07.711 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.711 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.711 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.711 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3418836 00:07:07.711 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3418836 00:07:07.711 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.969 lslocks: write error 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3418836 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3418836 ']' 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3418836 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418836 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418836' 00:07:07.969 killing process with pid 3418836 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3418836 00:07:07.969 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3418836 00:07:08.535 00:07:08.535 real 0m1.982s 00:07:08.535 user 0m2.162s 00:07:08.535 sys 0m0.643s 00:07:08.535 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.535 22:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 ************************************ 00:07:08.535 END TEST locking_app_on_locked_coremask 00:07:08.535 ************************************ 00:07:08.535 22:37:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:08.535 22:37:00 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.535 22:37:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.535 22:37:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 ************************************ 00:07:08.535 START TEST locking_overlapped_coremask 00:07:08.535 ************************************ 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3419134 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3419134 /var/tmp/spdk.sock 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3419134 ']' 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:08.535 22:37:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 [2024-07-26 22:37:00.888582] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
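The overlapped-coremask case hinges on simple mask arithmetic: the first target runs with -m 0x7 (binary 111, cores 0-2) and the second will ask for 0x1c (binary 11100, cores 2-4). Their intersection is core 2, which is exactly the core the failure below names:

    printf '%#x\n' $(( 0x7 & 0x1c ))   # 0x4 == 1<<2: only core 2 is contested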
00:07:08.535 [2024-07-26 22:37:00.888685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419134 ] 00:07:08.535 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.535 [2024-07-26 22:37:00.945901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.535 [2024-07-26 22:37:01.037006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.535 [2024-07-26 22:37:01.037056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.535 [2024-07-26 22:37:01.037063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.101 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3419139 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3419139 /var/tmp/spdk2.sock 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3419139 /var/tmp/spdk2.sock 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3419139 /var/tmp/spdk2.sock 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3419139 ']' 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.102 22:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.102 [2024-07-26 22:37:01.349923] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
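After the second target aborts on the contested core, check_remaining_locks (traced just below) proves the failure left the lock files exactly as the surviving 0x7 target set them: it globs the lock directory and compares the result against a brace expansion of the expected set.

    locks=(/var/tmp/spdk_cpu_lock_*)                      # what exists now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # what core mask 0x7 implies
    [[ ${locks[*]} == "${locks_expected[*]}" ]]           # cores 0-2 only, nothing leaked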
00:07:09.102 [2024-07-26 22:37:01.350005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419139 ] 00:07:09.102 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.102 [2024-07-26 22:37:01.436360] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3419134 has claimed it. 00:07:09.102 [2024-07-26 22:37:01.436429] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:09.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3419139) - No such process 00:07:09.668 ERROR: process (pid: 3419139) is no longer running 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3419134 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3419134 ']' 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3419134 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3419134 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3419134' 00:07:09.668 killing process with pid 3419134 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3419134 00:07:09.668 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3419134 00:07:10.235 00:07:10.235 real 0m1.646s 00:07:10.235 user 0m4.469s 00:07:10.235 sys 0m0.455s 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.235 ************************************ 00:07:10.235 END TEST locking_overlapped_coremask 00:07:10.235 ************************************ 00:07:10.235 22:37:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:10.235 22:37:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.235 22:37:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.235 22:37:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.235 ************************************ 00:07:10.235 START TEST locking_overlapped_coremask_via_rpc 00:07:10.235 ************************************ 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3419303 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3419303 /var/tmp/spdk.sock 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3419303 ']' 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.235 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.235 [2024-07-26 22:37:02.590309] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:10.235 [2024-07-26 22:37:02.590406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419303 ] 00:07:10.235 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.235 [2024-07-26 22:37:02.648456] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:10.235 [2024-07-26 22:37:02.648494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.494 [2024-07-26 22:37:02.738539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.494 [2024-07-26 22:37:02.738591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.494 [2024-07-26 22:37:02.738594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3419438 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3419438 /var/tmp/spdk2.sock 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3419438 ']' 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.494 22:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.752 [2024-07-26 22:37:03.034697] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:10.752 [2024-07-26 22:37:03.034777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419438 ] 00:07:10.752 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.752 [2024-07-26 22:37:03.121998] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:10.752 [2024-07-26 22:37:03.122031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.010 [2024-07-26 22:37:03.298034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.010 [2024-07-26 22:37:03.301155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:11.010 [2024-07-26 22:37:03.301158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.574 [2024-07-26 22:37:03.982167] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3419303 has claimed it. 
00:07:11.574 request: 00:07:11.574 { 00:07:11.574 "method": "framework_enable_cpumask_locks", 00:07:11.574 "req_id": 1 00:07:11.574 } 00:07:11.574 Got JSON-RPC error response 00:07:11.574 response: 00:07:11.574 { 00:07:11.574 "code": -32603, 00:07:11.574 "message": "Failed to claim CPU core: 2" 00:07:11.574 } 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3419303 /var/tmp/spdk.sock 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3419303 ']' 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.574 22:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.831 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.831 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:11.831 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3419438 /var/tmp/spdk2.sock 00:07:11.831 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3419438 ']' 00:07:11.831 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.831 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.831 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
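A note on the error shape above: -32603 is the generic JSON-RPC 2.0 "Internal error" code, so the informative part is the message string, where SPDK reports the actual cause ("Failed to claim CPU core: 2"). From the shell the failure surfaces as a plain non-zero exit status, which is all the NOT wrapper needs; checking it directly is a hypothetical refinement:

    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'claim failed, as this test expects'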
00:07:11.831 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.831 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.089 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:12.089 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:12.089 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:12.089 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:12.089 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:12.089 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:12.089 00:07:12.089 real 0m1.951s 00:07:12.089 user 0m1.009s 00:07:12.089 sys 0m0.184s 00:07:12.089 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.089 22:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.089 ************************************ 00:07:12.089 END TEST locking_overlapped_coremask_via_rpc 00:07:12.089 ************************************ 00:07:12.089 22:37:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:12.089 22:37:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3419303 ]] 00:07:12.089 22:37:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3419303 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3419303 ']' 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3419303 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3419303 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3419303' 00:07:12.089 killing process with pid 3419303 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3419303 00:07:12.089 22:37:04 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3419303 00:07:12.654 22:37:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3419438 ]] 00:07:12.654 22:37:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3419438 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3419438 ']' 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3419438 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3419438 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3419438' 00:07:12.654 killing process with pid 3419438 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3419438 00:07:12.654 22:37:04 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3419438 00:07:12.912 22:37:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:12.912 22:37:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:12.912 22:37:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3419303 ]] 00:07:12.912 22:37:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3419303 00:07:12.912 22:37:05 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3419303 ']' 00:07:12.912 22:37:05 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3419303 00:07:12.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3419303) - No such process 00:07:12.912 22:37:05 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3419303 is not found' 00:07:12.912 Process with pid 3419303 is not found 00:07:12.912 22:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3419438 ]] 00:07:12.912 22:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3419438 00:07:12.912 22:37:05 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3419438 ']' 00:07:12.912 22:37:05 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3419438 00:07:12.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3419438) - No such process 00:07:12.912 22:37:05 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3419438 is not found' 00:07:12.912 Process with pid 3419438 is not found 00:07:12.912 22:37:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:12.912 00:07:12.912 real 0m15.823s 00:07:12.912 user 0m27.494s 00:07:12.912 sys 0m5.379s 00:07:12.912 22:37:05 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.913 22:37:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.913 ************************************ 00:07:12.913 END TEST cpu_locks 00:07:12.913 ************************************ 00:07:12.913 00:07:12.913 real 0m41.597s 00:07:12.913 user 1m18.784s 00:07:12.913 sys 0m9.415s 00:07:12.913 22:37:05 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.913 22:37:05 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.913 ************************************ 00:07:12.913 END TEST event 00:07:12.913 ************************************ 00:07:12.913 22:37:05 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:12.913 22:37:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:12.913 22:37:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.913 22:37:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.171 ************************************ 00:07:13.171 START TEST thread 00:07:13.171 ************************************ 00:07:13.171 22:37:05 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:13.171 * Looking for test storage... 00:07:13.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:13.171 22:37:05 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.171 22:37:05 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:13.171 22:37:05 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.171 22:37:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.171 ************************************ 00:07:13.171 START TEST thread_poller_perf 00:07:13.171 ************************************ 00:07:13.171 22:37:05 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.171 [2024-07-26 22:37:05.520298] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:13.171 [2024-07-26 22:37:05.520376] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419802 ] 00:07:13.171 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.171 [2024-07-26 22:37:05.578768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.171 [2024-07-26 22:37:05.666400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.171 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:14.544 ====================================== 00:07:14.544 busy:2709223488 (cyc) 00:07:14.544 total_run_count: 289000 00:07:14.544 tsc_hz: 2700000000 (cyc) 00:07:14.544 ====================================== 00:07:14.544 poller_cost: 9374 (cyc), 3471 (nsec) 00:07:14.544 00:07:14.544 real 0m1.248s 00:07:14.544 user 0m1.168s 00:07:14.544 sys 0m0.075s 00:07:14.544 22:37:06 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.544 22:37:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:14.544 ************************************ 00:07:14.544 END TEST thread_poller_perf 00:07:14.544 ************************************ 00:07:14.544 22:37:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.544 22:37:06 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:14.545 22:37:06 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.545 22:37:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.545 ************************************ 00:07:14.545 START TEST thread_poller_perf 00:07:14.545 ************************************ 00:07:14.545 22:37:06 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.545 [2024-07-26 22:37:06.819953] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
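The poller_cost summary above is plain arithmetic over the printed counters: cycles per poll is busy divided by total_run_count, and the nanosecond figure divides that by the TSC rate in GHz. A quick sketch that reproduces the first run's numbers from the values in the summary (the awk call is illustrative, not part of the test):

    # poller_cost(cyc)  = busy / total_run_count
    # poller_cost(nsec) = poller_cost(cyc) / (tsc_hz / 1e9)
    awk 'BEGIN {
        busy = 2709223488; runs = 289000; tsc_hz = 2700000000
        cyc  = int(busy / runs)            # 9374
        nsec = int(cyc / (tsc_hz / 1e9))   # 3471
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
    }'

The same arithmetic gives 702 cyc / 260 nsec for the zero-period run that follows, where the pollers fire back-to-back and the per-poll overhead is correspondingly smaller.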
00:07:14.545 [2024-07-26 22:37:06.820019] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419960 ] 00:07:14.545 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.545 [2024-07-26 22:37:06.881567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.545 [2024-07-26 22:37:06.973985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.545 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:15.919 ====================================== 00:07:15.919 busy:2702970138 (cyc) 00:07:15.919 total_run_count: 3850000 00:07:15.919 tsc_hz: 2700000000 (cyc) 00:07:15.919 ====================================== 00:07:15.919 poller_cost: 702 (cyc), 260 (nsec) 00:07:15.919 00:07:15.919 real 0m1.250s 00:07:15.919 user 0m1.155s 00:07:15.919 sys 0m0.089s 00:07:15.919 22:37:08 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.919 22:37:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.920 ************************************ 00:07:15.920 END TEST thread_poller_perf 00:07:15.920 ************************************ 00:07:15.920 22:37:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:15.920 00:07:15.920 real 0m2.648s 00:07:15.920 user 0m2.387s 00:07:15.920 sys 0m0.261s 00:07:15.920 22:37:08 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.920 22:37:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.920 ************************************ 00:07:15.920 END TEST thread 00:07:15.920 ************************************ 00:07:15.920 22:37:08 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:15.920 22:37:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.920 22:37:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.920 22:37:08 -- common/autotest_common.sh@10 -- # set +x 00:07:15.920 ************************************ 00:07:15.920 START TEST accel 00:07:15.920 ************************************ 00:07:15.920 22:37:08 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:15.920 * Looking for test storage... 
00:07:15.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:15.920 22:37:08 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:15.920 22:37:08 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:15.920 22:37:08 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:15.920 22:37:08 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3420153 00:07:15.920 22:37:08 accel -- accel/accel.sh@63 -- # waitforlisten 3420153 00:07:15.920 22:37:08 accel -- common/autotest_common.sh@827 -- # '[' -z 3420153 ']' 00:07:15.920 22:37:08 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:15.920 22:37:08 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.920 22:37:08 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:15.920 22:37:08 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.920 22:37:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.920 22:37:08 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.920 22:37:08 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.920 22:37:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.920 22:37:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.920 22:37:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.920 22:37:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.920 22:37:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.920 22:37:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:15.920 22:37:08 accel -- accel/accel.sh@41 -- # jq -r . 00:07:15.920 [2024-07-26 22:37:08.236314] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:15.920 [2024-07-26 22:37:08.236400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420153 ] 00:07:15.920 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.920 [2024-07-26 22:37:08.301384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.920 [2024-07-26 22:37:08.394268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.179 22:37:08 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.179 22:37:08 accel -- common/autotest_common.sh@860 -- # return 0 00:07:16.179 22:37:08 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:16.179 22:37:08 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:16.179 22:37:08 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:16.179 22:37:08 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:16.179 22:37:08 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:16.179 22:37:08 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:16.179 22:37:08 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.179 22:37:08 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:16.179 22:37:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.179 22:37:08 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 
22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:16.438 22:37:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:16.438 22:37:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.438 22:37:08 accel -- accel/accel.sh@75 -- # killprocess 3420153 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@946 -- # '[' -z 3420153 ']' 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@950 -- # kill -0 3420153 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@951 -- # uname 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3420153 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3420153' 00:07:16.438 killing process with pid 3420153 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@965 -- # kill 3420153 00:07:16.438 22:37:08 accel -- common/autotest_common.sh@970 -- # wait 3420153 00:07:16.697 22:37:09 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:16.697 22:37:09 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:16.697 22:37:09 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:16.697 22:37:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.697 22:37:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.697 22:37:09 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:16.697 22:37:09 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:16.697 22:37:09 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:16.697 22:37:09 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.697 22:37:09 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.697 22:37:09 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.697 22:37:09 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.697 22:37:09 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.697 22:37:09 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:16.697 22:37:09 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
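The killprocess calls traced in this section (for 3419303 and 3419438 earlier, and just above for the spdk_tgt pid 3420153) all follow the same shape: validate the pid argument, probe it with kill -0, look up the command name with ps --no-headers -o comm=, refuse to touch a bare sudo wrapper, then kill and wait. A condensed sketch of that flow as the trace shows it (anything not visible in the trace, such as exact return codes, is an assumption):

    # Sketch of the killprocess flow visible in the xtrace above.
    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1                       # '[' -z <pid> ']'
        if ! kill -0 "$pid" 2>/dev/null; then           # probe: is the pid alive?
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        fi
        [ "$process_name" = sudo ] && return 1          # never kill a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }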
00:07:16.697 22:37:09 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.697 22:37:09 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:16.697 22:37:09 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:16.697 22:37:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:16.697 22:37:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.697 22:37:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.955 ************************************ 00:07:16.955 START TEST accel_missing_filename 00:07:16.955 ************************************ 00:07:16.955 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:16.955 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:16.955 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:16.955 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:16.955 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.956 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:16.956 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.956 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:16.956 22:37:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:16.956 22:37:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:16.956 22:37:09 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.956 22:37:09 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.956 22:37:09 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.956 22:37:09 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.956 22:37:09 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.956 22:37:09 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:16.956 22:37:09 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:16.956 [2024-07-26 22:37:09.223433] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:16.956 [2024-07-26 22:37:09.223497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420321 ] 00:07:16.956 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.956 [2024-07-26 22:37:09.284248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.956 [2024-07-26 22:37:09.376997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.956 [2024-07-26 22:37:09.438393] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.215 [2024-07-26 22:37:09.521797] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:17.215 A filename is required. 
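What follows is NOT doing its accounting on the failed run: the harness captures the exit status, folds a signal death (status above 128) down by 128, collapses the result to 1, and then succeeds precisely because the status is non-zero. A minimal sketch of that logic, assuming the helper shape from the trace (the full case table for known codes is not visible in the log):

    # Minimal sketch of NOT(): succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))  # signal death: 234 -> 106 in the trace below
        case "$es" in
            0) ;;                           # command unexpectedly succeeded
            *) es=1 ;;                      # known failures collapse to 1 (table elided here)
        esac
        (( !es == 0 ))                      # exit 0 iff the wrapped command failed
    }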
00:07:17.215 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:17.215 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.215 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:17.215 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:17.215 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:17.215 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.215 00:07:17.215 real 0m0.393s 00:07:17.215 user 0m0.282s 00:07:17.215 sys 0m0.144s 00:07:17.215 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.215 22:37:09 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:17.215 ************************************ 00:07:17.215 END TEST accel_missing_filename 00:07:17.215 ************************************ 00:07:17.215 22:37:09 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:17.215 22:37:09 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:17.215 22:37:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.215 22:37:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.215 ************************************ 00:07:17.215 START TEST accel_compress_verify 00:07:17.215 ************************************ 00:07:17.215 22:37:09 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:17.215 22:37:09 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:17.215 22:37:09 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:17.215 22:37:09 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:17.215 22:37:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.215 22:37:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:17.215 22:37:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.215 22:37:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:17.215 22:37:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:17.215 22:37:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:17.215 22:37:09 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.215 22:37:09 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.215 22:37:09 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.215 22:37:09 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.215 22:37:09 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.215 
22:37:09 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:17.215 22:37:09 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:17.215 [2024-07-26 22:37:09.666735] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:17.215 [2024-07-26 22:37:09.666801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420370 ] 00:07:17.215 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.473 [2024-07-26 22:37:09.730794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.473 [2024-07-26 22:37:09.824332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.473 [2024-07-26 22:37:09.884365] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.473 [2024-07-26 22:37:09.970325] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:17.733 00:07:17.733 Compression does not support the verify option, aborting. 00:07:17.733 22:37:10 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:17.733 22:37:10 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.733 22:37:10 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:17.733 22:37:10 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:17.733 22:37:10 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:17.733 22:37:10 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.733 00:07:17.733 real 0m0.406s 00:07:17.733 user 0m0.294s 00:07:17.733 sys 0m0.145s 00:07:17.733 22:37:10 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.733 22:37:10 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:17.733 ************************************ 00:07:17.733 END TEST accel_compress_verify 00:07:17.733 ************************************ 00:07:17.733 22:37:10 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:17.733 22:37:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:17.733 22:37:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.733 22:37:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.733 ************************************ 00:07:17.733 START TEST accel_wrong_workload 00:07:17.733 ************************************ 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
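Before NOT runs the doomed foobar command above, valid_exec_arg (the @636/@640 lines in the trace) checks that the first word is something bash can actually execute, via type -t. A sketch of that guard, keeping only what the trace shows (the set of accepted type names is an assumption):

    # Sketch of valid_exec_arg as traced: accept the command word only if
    # bash classifies it as something executable.
    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in        # function, builtin, file, alias, keyword, or ''
            function|builtin|file) return 0 ;;
            *) return 1 ;;                 # unknown word: refuse to exec it
        esac
    }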
00:07:17.733 22:37:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:17.733 22:37:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:17.733 22:37:10 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.733 22:37:10 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.733 22:37:10 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.733 22:37:10 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.733 22:37:10 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.733 22:37:10 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:17.733 22:37:10 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:17.733 Unsupported workload type: foobar 00:07:17.733 [2024-07-26 22:37:10.122740] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:17.733 accel_perf options: 00:07:17.733 [-h help message] 00:07:17.733 [-q queue depth per core] 00:07:17.733 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:17.733 [-T number of threads per core 00:07:17.733 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:17.733 [-t time in seconds] 00:07:17.733 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:17.733 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:17.733 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:17.733 [-l for compress/decompress workloads, name of uncompressed input file 00:07:17.733 [-S for crc32c workload, use this seed value (default 0) 00:07:17.733 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:17.733 [-f for fill workload, use this BYTE value (default 255) 00:07:17.733 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:17.733 [-y verify result if this switch is on] 00:07:17.733 [-a tasks to allocate per core (default: same value as -q)] 00:07:17.733 Can be used to spread operations across a wider range of memory. 
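The usage text above also shows what a well-formed call looks like; the crc32c tests below pass exactly this -t/-w/-S/-y combination. An illustrative valid invocation, built only from flags listed in the help output (the -q and -o values are made-up examples, not taken from the log):

    # Flags taken from the usage text above; -q/-o values are illustrative.
    ./build/examples/accel_perf -q 64 -o 4096 -t 1 -w crc32c -S 32 -y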
00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.733 00:07:17.733 real 0m0.024s 00:07:17.733 user 0m0.012s 00:07:17.733 sys 0m0.012s 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.733 22:37:10 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:17.733 ************************************ 00:07:17.733 END TEST accel_wrong_workload 00:07:17.733 ************************************ 00:07:17.733 Error: writing output failed: Broken pipe 00:07:17.733 22:37:10 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:17.733 22:37:10 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:17.733 22:37:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.733 22:37:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.733 ************************************ 00:07:17.733 START TEST accel_negative_buffers 00:07:17.733 ************************************ 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:17.733 22:37:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:17.733 22:37:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:17.733 22:37:10 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.733 22:37:10 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.733 22:37:10 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.733 22:37:10 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.733 22:37:10 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.733 22:37:10 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:17.733 22:37:10 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:17.733 -x option must be non-negative. 
00:07:17.733 [2024-07-26 22:37:10.185733] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:17.733 accel_perf options: 00:07:17.733 [-h help message] 00:07:17.733 [-q queue depth per core] 00:07:17.733 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:17.733 [-T number of threads per core 00:07:17.733 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:17.733 [-t time in seconds] 00:07:17.733 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:17.733 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:17.733 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:17.733 [-l for compress/decompress workloads, name of uncompressed input file 00:07:17.733 [-S for crc32c workload, use this seed value (default 0) 00:07:17.733 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:17.733 [-f for fill workload, use this BYTE value (default 255) 00:07:17.733 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:17.733 [-y verify result if this switch is on] 00:07:17.733 [-a tasks to allocate per core (default: same value as -q)] 00:07:17.733 Can be used to spread operations across a wider range of memory. 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.733 00:07:17.733 real 0m0.021s 00:07:17.733 user 0m0.014s 00:07:17.733 sys 0m0.007s 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.733 22:37:10 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:17.733 ************************************ 00:07:17.733 END TEST accel_negative_buffers 00:07:17.733 ************************************ 00:07:17.733 Error: writing output failed: Broken pipe 00:07:17.733 22:37:10 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:17.733 22:37:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:17.733 22:37:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.733 22:37:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.993 ************************************ 00:07:17.993 START TEST accel_crc32c 00:07:17.993 ************************************ 00:07:17.993 22:37:10 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:17.993 [2024-07-26 22:37:10.255980] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:17.993 [2024-07-26 22:37:10.256047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420533 ] 00:07:17.993 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.993 [2024-07-26 22:37:10.319382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.993 [2024-07-26 22:37:10.412868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.993 22:37:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.367 22:37:11 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.367 22:37:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:37:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:37:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.368 22:37:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:19.368 22:37:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.368 00:07:19.368 real 0m1.410s 00:07:19.368 user 0m1.257s 00:07:19.368 sys 0m0.157s 00:07:19.368 22:37:11 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.368 22:37:11 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:19.368 ************************************ 00:07:19.368 END TEST accel_crc32c 00:07:19.368 ************************************ 00:07:19.368 22:37:11 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:19.368 22:37:11 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:19.368 22:37:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.368 22:37:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.368 ************************************ 00:07:19.368 START TEST accel_crc32c_C2 00:07:19.368 ************************************ 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:19.368 22:37:11 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.368 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:19.368 [2024-07-26 22:37:11.707530] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:19.368 [2024-07-26 22:37:11.707594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420686 ] 00:07:19.368 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.368 [2024-07-26 22:37:11.768375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.368 [2024-07-26 22:37:11.860842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.626 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.627 22:37:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
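The long run of IFS=: / read -r var val / case "$var" lines above and below is a single loop in accel.sh replayed once per line of accel_perf output: each line is split on the first colon and the fields the test asserts on (the values traced here: crc32c, '4096 bytes', software, 32, '1 seconds', Yes) are recorded. The shape of that loop, with example key patterns (the exact patterns and variable names are not in the log):

    # Shape of the traced loop: split each "key: value" output line on ':'
    # and keep the fields the test checks afterwards.
    while IFS=: read -r var val; do
        case "$var" in
            *Workload*) accel_opc=${val# } ;;     # example, e.g. "Workload Type: crc32c"
            *Module*)   accel_module=${val# } ;;  # example, e.g. "Module: software"
            *) ;;                                 # everything else is ignored
        esac
    done <<< "$accel_perf_output"                 # hypothetical variable holding the run's stdout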
00:07:21.000 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.001 00:07:21.001 real 0m1.396s 00:07:21.001 user 0m1.260s 00:07:21.001 sys 0m0.138s 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.001 22:37:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:21.001 ************************************ 00:07:21.001 END TEST accel_crc32c_C2 00:07:21.001 ************************************ 00:07:21.001 22:37:13 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:21.001 22:37:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:21.001 22:37:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.001 22:37:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.001 ************************************ 00:07:21.001 START TEST accel_copy 00:07:21.001 ************************************ 00:07:21.001 22:37:13 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.001 
00:07:21.001 22:37:13 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:21.001 ************************************
00:07:21.001 START TEST accel_copy
00:07:21.001 ************************************
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:21.001 [2024-07-26 22:37:13.148655] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:07:21.001 [2024-07-26 22:37:13.148720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420949 ]
00:07:21.001 EAL: No free 2048 kB hugepages reported on node 1
00:07:21.001 [2024-07-26 22:37:13.212057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.001 [2024-07-26 22:37:13.305967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:21.001 22:37:13 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:07:22.390 22:37:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:22.390 22:37:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:22.390 22:37:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:22.390 real 0m1.405s
00:07:22.390 user 0m1.261s
00:07:22.390 sys 0m0.145s
00:07:22.390 ************************************
00:07:22.390 END TEST accel_copy
00:07:22.390 ************************************
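The section above records the exact binary and flags it exercised. Below is a sketch for re-running the same copy workload by hand, assuming a built SPDK tree; the workspace path mirrors this CI job, and the -c /dev/fd/62 JSON-config plumbing supplied by accel.sh is omitted, so accel_perf runs with its remaining defaults.

# Sketch: manual re-run of the software copy workload, under the assumptions above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to your tree
"$SPDK/build/examples/accel_perf" -t 1 -w copy -y        # 1-second copy run; -y verifies results (the val=Yes echoed above)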
00:07:22.390 22:37:14 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:22.390 ************************************
00:07:22.390 START TEST accel_fill
00:07:22.390 ************************************
00:07:22.390 22:37:14 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:22.390 22:37:14 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:22.390 22:37:14 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:22.390 [2024-07-26 22:37:14.595586] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:07:22.390 [2024-07-26 22:37:14.595648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421122 ]
00:07:22.390 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.390 [2024-07-26 22:37:14.656820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.390 [2024-07-26 22:37:14.749537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:07:22.391 22:37:14 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:07:23.768 22:37:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:23.768 22:37:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:23.768 22:37:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:23.768 real 0m1.403s
00:07:23.768 user 0m1.265s
00:07:23.768 sys 0m0.141s
00:07:23.768 ************************************
00:07:23.768 END TEST accel_fill
00:07:23.768 ************************************
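The fill run adds -f 128 -q 64 -a 64 on top of the common flags; consistent with that, the echoed config above shows val=0x80 (decimal 128) and two 64s where the other workloads echo 32s. The same invocation, under the same assumptions as the copy sketch earlier:

# Sketch: the fill workload exactly as launched above; 0x80 in the echoed
# config corresponds to the decimal 128 passed via -f.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y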
00:07:23.768 22:37:16 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:23.768 ************************************
00:07:23.768 START TEST accel_copy_crc32c
00:07:23.768 ************************************
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:23.768 [2024-07-26 22:37:16.043029] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:07:23.768 [2024-07-26 22:37:16.043114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421281 ]
00:07:23.768 EAL: No free 2048 kB hugepages reported on node 1
00:07:23.768 [2024-07-26 22:37:16.103629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.768 [2024-07-26 22:37:16.196229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:07:23.768 22:37:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:07:25.139 22:37:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:25.139 22:37:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:25.139 22:37:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:25.139 real 0m1.407s
00:07:25.139 user 0m1.259s
00:07:25.139 sys 0m0.151s
00:07:25.139 ************************************
00:07:25.139 END TEST accel_copy_crc32c
00:07:25.139 ************************************
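Every section follows the same shape (run_test banner, accel_perf invocation, EAL bring-up, echoed config, assertions, timings), so a saved console log condenses well. A hypothetical helper, not part of the SPDK tree, that reduces a log like this one to its banners and timing lines:

# Hypothetical log condenser: keep START/END banners plus real/user/sys lines.
awk '/START TEST|END TEST|(real|user|sys)[[:space:]]+0m/' console.log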
00:07:25.139 22:37:17 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:25.139 ************************************
00:07:25.139 START TEST accel_copy_crc32c_C2
00:07:25.139 ************************************
00:07:25.139 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:25.139 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:25.139 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:25.139 [2024-07-26 22:37:17.503852] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:07:25.139 [2024-07-26 22:37:17.503913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421434 ]
00:07:25.139 EAL: No free 2048 kB hugepages reported on node 1
00:07:25.139 [2024-07-26 22:37:17.566754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.139 [2024-07-26 22:37:17.663621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.397 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:25.397 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:25.397 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:25.397 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:25.397 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:25.398 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:07:25.398 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:25.398 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:25.398 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:25.398 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:25.398 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:25.398 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:25.398 22:37:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:26.770 22:37:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:26.770 22:37:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:26.770 22:37:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:26.770 real 0m1.408s
00:07:26.770 user 0m1.273s
00:07:26.770 sys 0m0.138s
00:07:26.770 ************************************
00:07:26.770 END TEST accel_copy_crc32c_C2
00:07:26.770 ************************************
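This -C 2 run is the only section whose echoed config shows an 8192-byte value next to the 4096-byte one; the plain copy_crc32c run echoes 4096 twice, and the command lines differ only by the -C 2 flag. The invocation, under the same assumptions as the earlier sketches:

# Sketch: the -C 2 variant of copy_crc32c exactly as launched above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2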
00:07:26.770 22:37:18 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:26.770 ************************************
00:07:26.770 START TEST accel_dualcast
00:07:26.770 ************************************
00:07:26.770 22:37:18 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:26.770 22:37:18 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:26.770 22:37:18 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:07:26.771 [2024-07-26 22:37:18.955613] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:07:26.771 [2024-07-26 22:37:18.955678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421711 ]
00:07:26.771 EAL: No free 2048 kB hugepages reported on node 1
00:07:26.771 [2024-07-26 22:37:19.015551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.771 [2024-07-26 22:37:19.105356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:07:26.771 22:37:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:07:28.144 22:37:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:28.144 22:37:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:28.144 22:37:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:28.144 real 0m1.397s
00:07:28.144 user 0m1.258s
00:07:28.144 sys 0m0.140s
00:07:28.144 ************************************
00:07:28.144 END TEST accel_dualcast
00:07:28.144 ************************************
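The three [[ ... ]] checks that close each section assert that a module name and an opcode were read back from accel_perf and that the software module handled the run. Reduced to a standalone sketch, with placeholder variables standing in for the values accel.sh parses out of accel_perf's output:

# Standalone sketch of the post-run assertions; both variables are
# placeholders for values the real harness reads back from accel_perf.
accel_module=software
accel_opc=dualcast
[[ -n $accel_module && -n $accel_opc && $accel_module == software ]] \
    && echo "PASS: $accel_opc handled by the software module"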
00:07:28.145 22:37:20 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:28.145 ************************************
00:07:28.145 START TEST accel_compare
00:07:28.145 ************************************
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:07:28.145 [2024-07-26 22:37:20.394429] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:07:28.145 [2024-07-26 22:37:20.394487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421869 ]
00:07:28.145 EAL: No free 2048 kB hugepages reported on node 1
00:07:28.145 [2024-07-26 22:37:20.455530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.145 [2024-07-26 22:37:20.548312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:07:28.145 22:37:20 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:07:29.519 22:37:21 accel.accel_compare
-- accel/accel.sh@20 -- # val= 00:07:29.519 22:37:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:29.519 22:37:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:29.519 22:37:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:29.519 22:37:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:29.519 22:37:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:29.519 22:37:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:29.520 22:37:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:29.520 22:37:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.520 22:37:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:29.520 22:37:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.520 00:07:29.520 real 0m1.408s 00:07:29.520 user 0m1.262s 00:07:29.520 sys 0m0.148s 00:07:29.520 22:37:21 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.520 22:37:21 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:29.520 ************************************ 00:07:29.520 END TEST accel_compare 00:07:29.520 ************************************ 00:07:29.520 22:37:21 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:29.520 22:37:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:29.520 22:37:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.520 22:37:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.520 ************************************ 00:07:29.520 START TEST accel_xor 00:07:29.520 ************************************ 00:07:29.520 22:37:21 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:29.520 22:37:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:29.520 [2024-07-26 22:37:21.845031] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
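A note on the xtrace churn: the repeated IFS=: / read -r var val / case "$var" in entries appear to be accel.sh splitting accel_perf's colon-separated summary into key/value pairs, which is what yields the val=software, val=compare, val='4096 bytes' lines; that reading is an inference from the trace, not confirmed against the script source. The compare pass itself finished cleanly (real 0m1.408s for a 1-second workload; the extra ~0.4s is presumably app startup and teardown) before the xor case starting here:

  # hedged sketch of the xor case just queued (flags verbatim from the log)
  ./build/examples/accel_perf -t 1 -w xor -y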
00:07:29.520 [2024-07-26 22:37:21.845156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422020 ] 00:07:29.520 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.520 [2024-07-26 22:37:21.905362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.520 [2024-07-26 22:37:21.998534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.778 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.779 22:37:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 
22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.153 00:07:31.153 real 0m1.404s 00:07:31.153 user 0m1.258s 00:07:31.153 sys 0m0.148s 00:07:31.153 22:37:23 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.153 22:37:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:31.153 ************************************ 00:07:31.153 END TEST accel_xor 00:07:31.153 ************************************ 00:07:31.153 22:37:23 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:31.153 22:37:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:31.153 22:37:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.153 22:37:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.153 ************************************ 00:07:31.153 START TEST accel_xor 00:07:31.153 ************************************ 00:07:31.153 22:37:23 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:31.153 [2024-07-26 22:37:23.289119] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
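The second accel_xor pass repeats the workload with -x 3: matching the val=3 in the option dump below (the first pass showed val=2), -x sets the number of XOR source buffers. A standalone equivalent, under the same assumptions as the earlier sketches:

  # xor across three source buffers instead of the previous two
  ./build/examples/accel_perf -t 1 -w xor -y -x 3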
00:07:31.153 [2024-07-26 22:37:23.289178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422278 ] 00:07:31.153 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.153 [2024-07-26 22:37:23.350129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.153 [2024-07-26 22:37:23.443057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.153 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.154 22:37:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.528 
22:37:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:32.528 22:37:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.528 00:07:32.528 real 0m1.399s 00:07:32.528 user 0m1.260s 00:07:32.528 sys 0m0.142s 00:07:32.528 22:37:24 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.528 22:37:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:32.528 ************************************ 00:07:32.528 END TEST accel_xor 00:07:32.528 ************************************ 00:07:32.528 22:37:24 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:32.528 22:37:24 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:32.528 22:37:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.528 22:37:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.528 ************************************ 00:07:32.528 START TEST accel_dif_verify 00:07:32.528 ************************************ 00:07:32.528 22:37:24 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:32.528 [2024-07-26 22:37:24.739789] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
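dif_verify is the first of the DIF cases; its option dump (below) adds buffer geometry to the usual settings: two '4096 bytes' values, a '512 bytes' value, and an '8 bytes' value. A sketch of a manual run, same assumptions as above:

  # verify T10 DIF protection information over the default buffers
  ./build/examples/accel_perf -t 1 -w dif_verify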
00:07:32.528 [2024-07-26 22:37:24.739852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422468 ] 00:07:32.528 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.528 [2024-07-26 22:37:24.802967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.528 [2024-07-26 22:37:24.895492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.528 
22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:32.528 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.529 22:37:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.903 
22:37:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:33.903 22:37:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.903 00:07:33.903 real 0m1.408s 00:07:33.903 user 0m1.268s 00:07:33.903 sys 0m0.144s 00:07:33.903 22:37:26 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.903 22:37:26 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:33.903 ************************************ 00:07:33.903 END TEST accel_dif_verify 00:07:33.903 ************************************ 00:07:33.903 22:37:26 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:33.903 22:37:26 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:33.903 22:37:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.903 22:37:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.903 ************************************ 00:07:33.903 START TEST accel_dif_generate 00:07:33.903 ************************************ 00:07:33.903 22:37:26 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:33.903 22:37:26 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:33.903 22:37:26 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:33.903 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.903 22:37:26 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
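The 8-byte value in the dif_verify dump above is consistent with the standard T10 DIF protection-information tuple (2-byte guard CRC, 2-byte application tag, 4-byte reference tag); how accel_perf maps the 4096- and 512-byte values onto data blocks versus metadata is not visible in this log, so that part is left uninterpreted. dif_generate, starting here, exercises the producing side of the same format:

  # hedged sketch: generate protection information rather than checking it
  ./build/examples/accel_perf -t 1 -w dif_generate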
00:07:33.903 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.903 22:37:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:33.904 [2024-07-26 22:37:26.188125] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:33.904 [2024-07-26 22:37:26.188183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422628 ] 00:07:33.904 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.904 [2024-07-26 22:37:26.249141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.904 [2024-07-26 22:37:26.341755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.904 22:37:26 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.904 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.162 22:37:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:35.097 22:37:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.097 00:07:35.097 real 0m1.409s 00:07:35.097 user 0m1.270s 00:07:35.097 sys 
0m0.144s 00:07:35.097 22:37:27 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.097 22:37:27 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:35.097 ************************************ 00:07:35.097 END TEST accel_dif_generate 00:07:35.097 ************************************ 00:07:35.356 22:37:27 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:35.356 22:37:27 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:35.356 22:37:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.356 22:37:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.356 ************************************ 00:07:35.356 START TEST accel_dif_generate_copy 00:07:35.356 ************************************ 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:35.356 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:35.356 [2024-07-26 22:37:27.650160] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
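dif_generate_copy, queued here, by its name combines the generate step with a copy into the destination buffer as a single accel operation, which is presumably why it is tested separately from dif_generate. Sketch under the same assumptions as the earlier reruns:

  # hedged sketch: generate-and-copy in one accel op
  ./build/examples/accel_perf -t 1 -w dif_generate_copy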
00:07:35.356 [2024-07-26 22:37:27.650224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422779 ] 00:07:35.356 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.356 [2024-07-26 22:37:27.713451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.356 [2024-07-26 22:37:27.804454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.614 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.615 22:37:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.549 00:07:36.549 real 0m1.398s 00:07:36.549 user 0m1.252s 00:07:36.549 sys 0m0.148s 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.549 22:37:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:36.549 ************************************ 00:07:36.549 END TEST accel_dif_generate_copy 00:07:36.549 ************************************ 00:07:36.549 22:37:29 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:36.549 22:37:29 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.549 22:37:29 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:36.549 22:37:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.549 22:37:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 ************************************ 00:07:36.808 START TEST accel_comp 00:07:36.808 ************************************ 00:07:36.808 22:37:29 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:36.808 22:37:29 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:36.808 [2024-07-26 22:37:29.095111] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:36.808 [2024-07-26 22:37:29.095185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423058 ] 00:07:36.808 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.808 [2024-07-26 22:37:29.158835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.808 [2024-07-26 22:37:29.252029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.066 
22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.066 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.067 22:37:29 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.067 22:37:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:38.002 22:37:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.002 00:07:38.002 real 0m1.404s 00:07:38.002 user 0m1.258s 00:07:38.002 sys 0m0.150s 00:07:38.002 22:37:30 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.002 22:37:30 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:38.002 ************************************ 00:07:38.002 END TEST accel_comp 00:07:38.002 ************************************ 00:07:38.002 22:37:30 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.002 22:37:30 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:38.002 22:37:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.002 22:37:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.261 ************************************ 00:07:38.261 START TEST accel_decomp 00:07:38.261 ************************************ 00:07:38.261 22:37:30 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:38.261 [2024-07-26 22:37:30.542834] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:38.261 [2024-07-26 22:37:30.542897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423216 ] 00:07:38.261 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.261 [2024-07-26 22:37:30.598245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.261 [2024-07-26 22:37:30.682628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.261 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.262 22:37:30 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.262 22:37:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.668 22:37:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.668 00:07:39.668 real 0m1.399s 00:07:39.668 user 0m1.259s 00:07:39.668 sys 0m0.144s 00:07:39.668 22:37:31 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.668 22:37:31 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:39.668 ************************************ 00:07:39.668 END TEST accel_decomp 00:07:39.668 ************************************ 00:07:39.668 
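Every test in this stream is framed the same way: a star banner with "START TEST", the timed test body, the "real/user/sys" triple, then a closing "END TEST" banner. Those come from the run_test wrapper in autotest_common.sh; the following is a rough reconstruction of its observable behavior, with the wrapper's xtrace management and the "'[' N -le 1 ']'" argument-count checks visible in the trace elided.

    # Approximate shape of run_test from autotest_common.sh; the real helper
    # differs in detail, but the log output it produces matches this.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"   # source of the real/user/sys lines after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test accel_comp sleep 1   # hypothetical call; the log passes accel_test plus its flags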
22:37:31 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:39.668 22:37:31 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:39.668 22:37:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.668 22:37:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.668 ************************************ 00:07:39.668 START TEST accel_decmop_full 00:07:39.668 ************************************ 00:07:39.668 22:37:31 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:39.668 22:37:31 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:39.668 [2024-07-26 22:37:31.985372] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
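The binary actually being exercised is visible in the trace at accel/accel.sh@12 above. Stripped of the harness-specific "-c /dev/fd/62" config redirection and the Jenkins workspace prefix, the same run can be reproduced by hand from the root of an SPDK build tree; this is a convenience rewrite of the traced command, not an additional test step.

    # Hand-run equivalent of the accel_decmop_full invocation traced above
    # (the /dev/fd/62 JSON config plumbing from the harness is omitted).
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0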
00:07:39.668 [2024-07-26 22:37:31.985439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423369 ] 00:07:39.668 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.668 [2024-07-26 22:37:32.045140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.668 [2024-07-26 22:37:32.141087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.927 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:39.928 22:37:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.303 22:37:33 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.303 00:07:41.303 real 0m1.421s 00:07:41.303 user 0m1.282s 00:07:41.303 sys 0m0.142s 00:07:41.303 22:37:33 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.303 22:37:33 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:41.303 ************************************ 00:07:41.303 END TEST accel_decmop_full 00:07:41.303 ************************************ 00:07:41.303 22:37:33 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:41.303 22:37:33 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:41.303 22:37:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.303 22:37:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.303 ************************************ 00:07:41.303 START TEST accel_decomp_mcore 00:07:41.303 ************************************ 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:41.303 [2024-07-26 22:37:33.454816] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:41.303 [2024-07-26 22:37:33.454879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423569 ] 00:07:41.303 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.303 [2024-07-26 22:37:33.517596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.303 [2024-07-26 22:37:33.612465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.303 [2024-07-26 22:37:33.612539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.303 [2024-07-26 22:37:33.612630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.303 [2024-07-26 22:37:33.612632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.303 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:41.304 22:37:33 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.304 22:37:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.677 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.678 00:07:42.678 real 0m1.409s 00:07:42.678 user 0m4.689s 00:07:42.678 sys 0m0.153s 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.678 22:37:34 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:42.678 ************************************ 00:07:42.678 END TEST accel_decomp_mcore 00:07:42.678 ************************************ 00:07:42.678 22:37:34 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.678 22:37:34 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:42.678 22:37:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.678 22:37:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.678 ************************************ 00:07:42.678 START TEST accel_decomp_full_mcore 00:07:42.678 ************************************ 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:42.678 22:37:34 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:42.678 [2024-07-26 22:37:34.910235] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:42.678 [2024-07-26 22:37:34.910305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423805 ] 00:07:42.678 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.678 [2024-07-26 22:37:34.972916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.678 [2024-07-26 22:37:35.066413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.678 [2024-07-26 22:37:35.066467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.678 [2024-07-26 22:37:35.066583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.678 [2024-07-26 22:37:35.066586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:42.678 22:37:35 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:42.678 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.679 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.679 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.679 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.679 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.679 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.679 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.679 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.679 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.679 22:37:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.052 00:07:44.052 real 0m1.409s 00:07:44.052 user 0m4.715s 00:07:44.052 sys 0m0.138s 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.052 22:37:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:44.052 ************************************ 00:07:44.052 END TEST accel_decomp_full_mcore 00:07:44.052 ************************************ 00:07:44.052 22:37:36 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:44.052 22:37:36 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:44.052 22:37:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.052 22:37:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.052 ************************************ 00:07:44.052 START TEST accel_decomp_mthread 00:07:44.052 ************************************ 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:44.052 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:44.052 [2024-07-26 22:37:36.361228] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:44.052 [2024-07-26 22:37:36.361286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423970 ] 00:07:44.052 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.052 [2024-07-26 22:37:36.422212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.052 [2024-07-26 22:37:36.515028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.311 22:37:36 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.684 00:07:45.684 real 0m1.418s 00:07:45.684 user 0m1.272s 00:07:45.684 sys 0m0.149s 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.684 22:37:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:45.684 ************************************ 00:07:45.684 END TEST accel_decomp_mthread 00:07:45.684 ************************************ 00:07:45.684 22:37:37 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.684 22:37:37 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:45.684 22:37:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.684 22:37:37 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.684 ************************************ 00:07:45.684 START TEST accel_decomp_full_mthread 00:07:45.684 ************************************ 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:45.684 22:37:37 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:45.684 [2024-07-26 22:37:37.820762] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
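Stripped of the harness, the command under test in this suite is the accel_perf example binary, invoked as traced just above. A standalone equivalent (workspace paths shortened; fd 62 carries the JSON accel config, empty in this run since no module flags were set; the meanings of -o 0 and -T 2 are inferred from the 'full'/'mthread' test names rather than from documented usage):

    # -t 1: run for one second; -w decompress: decompress the bib input
    # -y: verify output; -T 2: two threads (the 'mthread' variant, inferred)
    # -o 0: full-size buffers (the 'full' variant, inferred)
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l test/accel/bib -y -o 0 -T 2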
00:07:45.684 [2024-07-26 22:37:37.820816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424123 ] 00:07:45.684 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.684 [2024-07-26 22:37:37.880457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.684 [2024-07-26 22:37:37.975575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:45.684 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:45.685 22:37:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.058 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.059 00:07:47.059 real 0m1.439s 00:07:47.059 user 0m1.303s 00:07:47.059 sys 0m0.139s 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.059 22:37:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:47.059 ************************************ 00:07:47.059 END TEST accel_decomp_full_mthread 00:07:47.059 
************************************ 00:07:47.059 22:37:39 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:47.059 22:37:39 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:47.059 22:37:39 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:47.059 22:37:39 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:47.059 22:37:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.059 22:37:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.059 22:37:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.059 22:37:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.059 22:37:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.059 22:37:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.059 22:37:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.059 22:37:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:47.059 22:37:39 accel -- accel/accel.sh@41 -- # jq -r . 00:07:47.059 ************************************ 00:07:47.059 START TEST accel_dif_functional_tests 00:07:47.059 ************************************ 00:07:47.059 22:37:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:47.059 [2024-07-26 22:37:39.327504] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:47.059 [2024-07-26 22:37:39.327564] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424398 ] 00:07:47.059 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.059 [2024-07-26 22:37:39.387251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.059 [2024-07-26 22:37:39.482148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.059 [2024-07-26 22:37:39.482203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.059 [2024-07-26 22:37:39.482221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.318 00:07:47.318 00:07:47.318 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.318 http://cunit.sourceforge.net/ 00:07:47.318 00:07:47.318 00:07:47.318 Suite: accel_dif 00:07:47.318 Test: verify: DIF generated, GUARD check ...passed 00:07:47.318 Test: verify: DIF generated, APPTAG check ...passed 00:07:47.318 Test: verify: DIF generated, REFTAG check ...passed 00:07:47.318 Test: verify: DIF not generated, GUARD check ...[2024-07-26 22:37:39.574843] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:47.318 passed 00:07:47.318 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 22:37:39.574905] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:47.318 passed 00:07:47.318 Test: verify: DIF not generated, REFTAG check ...[2024-07-26 22:37:39.574938] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:47.318 passed 00:07:47.318 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:47.318 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-26 22:37:39.574998] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:47.318 passed 00:07:47.318 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:47.318 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:47.318 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:47.318 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-26 22:37:39.575164] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:47.318 passed 00:07:47.318 Test: verify copy: DIF generated, GUARD check ...passed 00:07:47.318 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:47.318 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:47.318 Test: verify copy: DIF not generated, GUARD check ...[2024-07-26 22:37:39.575321] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:47.318 passed 00:07:47.318 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-26 22:37:39.575374] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:47.318 passed 00:07:47.318 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-26 22:37:39.575422] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:47.318 passed 00:07:47.318 Test: generate copy: DIF generated, GUARD check ...passed 00:07:47.318 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:47.318 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:47.318 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:47.318 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:47.318 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:47.318 Test: generate copy: iovecs-len validate ...[2024-07-26 22:37:39.575635] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
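The failing-compare cases traced above come from a standalone CUnit binary (test/accel/dif/dif), so they can be re-run outside the harness. A hypothetical standalone rerun, assuming an empty JSON accel config is acceptable on fd 62 as in this run:

    # rerun the DIF CUnit suite by hand; the empty config mirrors this run's
    # build_accel_config with no accel module selected (exact shape assumed)
    ./test/accel/dif/dif -c /dev/fd/62 62< <(printf '')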
00:07:47.318 passed 00:07:47.318 Test: generate copy: buffer alignment validate ...passed 00:07:47.318 00:07:47.318 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.318 suites 1 1 n/a 0 0 00:07:47.318 tests 26 26 26 0 0 00:07:47.318 asserts 115 115 115 0 n/a 00:07:47.318 00:07:47.318 Elapsed time = 0.002 seconds 00:07:47.318 00:07:47.318 real 0m0.484s 00:07:47.318 user 0m0.731s 00:07:47.318 sys 0m0.179s 00:07:47.318 22:37:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.318 22:37:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:47.318 ************************************ 00:07:47.318 END TEST accel_dif_functional_tests 00:07:47.318 ************************************ 00:07:47.318 00:07:47.318 real 0m31.667s 00:07:47.318 user 0m34.997s 00:07:47.318 sys 0m4.550s 00:07:47.318 22:37:39 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.318 22:37:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.318 ************************************ 00:07:47.318 END TEST accel 00:07:47.318 ************************************ 00:07:47.576 22:37:39 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:47.576 22:37:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.576 22:37:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.576 22:37:39 -- common/autotest_common.sh@10 -- # set +x 00:07:47.576 ************************************ 00:07:47.576 START TEST accel_rpc 00:07:47.576 ************************************ 00:07:47.576 22:37:39 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:47.576 * Looking for test storage... 00:07:47.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:47.576 22:37:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:47.576 22:37:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3424467 00:07:47.576 22:37:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:47.576 22:37:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3424467 00:07:47.576 22:37:39 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3424467 ']' 00:07:47.576 22:37:39 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.576 22:37:39 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:47.576 22:37:39 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.576 22:37:39 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:47.576 22:37:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.576 [2024-07-26 22:37:39.942860] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
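The accel_rpc suite launched just above reduces to a short RPC conversation with spdk_tgt. The method names and arguments below are taken verbatim from the trace; rpc_cmd is shown as a direct scripts/rpc.py call and workspace paths are shortened:

    ./build/bin/spdk_tgt --wait-for-rpc &                     # init pauses until RPC
    # (the harness waits for the RPC socket before issuing commands)
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # later assignment wins
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints: software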
00:07:47.576 [2024-07-26 22:37:39.942954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424467 ] 00:07:47.576 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.576 [2024-07-26 22:37:40.005108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.835 [2024-07-26 22:37:40.098689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.835 22:37:40 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.835 22:37:40 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:47.835 22:37:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:47.835 22:37:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:47.835 22:37:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:47.835 22:37:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:47.835 22:37:40 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:47.835 22:37:40 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.835 22:37:40 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.835 22:37:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.835 ************************************ 00:07:47.835 START TEST accel_assign_opcode 00:07:47.835 ************************************ 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:47.835 [2024-07-26 22:37:40.175404] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:47.835 [2024-07-26 22:37:40.183407] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.835 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:48.093 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.093 22:37:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:48.093 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.093 22:37:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:48.093 22:37:40 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:48.093 22:37:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:48.093 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.093 software 00:07:48.093 00:07:48.093 real 0m0.289s 00:07:48.093 user 0m0.038s 00:07:48.093 sys 0m0.005s 00:07:48.093 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.093 22:37:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:48.093 ************************************ 00:07:48.093 END TEST accel_assign_opcode 00:07:48.093 ************************************ 00:07:48.093 22:37:40 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3424467 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3424467 ']' 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3424467 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3424467 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3424467' 00:07:48.093 killing process with pid 3424467 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@965 -- # kill 3424467 00:07:48.093 22:37:40 accel_rpc -- common/autotest_common.sh@970 -- # wait 3424467 00:07:48.696 00:07:48.696 real 0m1.063s 00:07:48.696 user 0m1.000s 00:07:48.696 sys 0m0.409s 00:07:48.696 22:37:40 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.696 22:37:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.696 ************************************ 00:07:48.696 END TEST accel_rpc 00:07:48.696 ************************************ 00:07:48.696 22:37:40 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:48.696 22:37:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:48.696 22:37:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.696 22:37:40 -- common/autotest_common.sh@10 -- # set +x 00:07:48.696 ************************************ 00:07:48.696 START TEST app_cmdline 00:07:48.696 ************************************ 00:07:48.696 22:37:40 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:48.696 * Looking for test storage... 
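The app_cmdline suite starting just above exercises spdk_tgt's RPC allow-list. Condensed from the trace (method names and the -32601 response are as logged; workspace paths shortened):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version        # allowed: returns the version object
    ./scripts/rpc.py rpc_get_methods         # allowed: lists exactly these two methods
    ./scripts/rpc.py env_dpdk_get_mem_stats  # blocked: JSON-RPC error -32601, Method not found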
00:07:48.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:48.696 22:37:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:48.696 22:37:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3424673 00:07:48.696 22:37:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:48.696 22:37:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3424673 00:07:48.696 22:37:41 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3424673 ']' 00:07:48.696 22:37:41 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.696 22:37:41 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.696 22:37:41 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.696 22:37:41 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.696 22:37:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:48.696 [2024-07-26 22:37:41.060560] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:48.696 [2024-07-26 22:37:41.060639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424673 ] 00:07:48.696 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.696 [2024-07-26 22:37:41.116248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.954 [2024-07-26 22:37:41.202933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.954 22:37:41 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.954 22:37:41 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:48.954 22:37:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:49.212 { 00:07:49.212 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:07:49.212 "fields": { 00:07:49.212 "major": 24, 00:07:49.212 "minor": 5, 00:07:49.212 "patch": 1, 00:07:49.212 "suffix": "-pre", 00:07:49.212 "commit": "241d0f3c9" 00:07:49.212 } 00:07:49.212 } 00:07:49.212 22:37:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:49.212 22:37:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:49.212 22:37:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:49.212 22:37:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:49.212 22:37:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:49.212 22:37:41 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.212 22:37:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:49.213 22:37:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:49.213 22:37:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:49.213 22:37:41 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.470 22:37:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:49.470 22:37:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:49.470 22:37:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:49.470 22:37:41 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.470 request: 00:07:49.470 { 00:07:49.470 "method": "env_dpdk_get_mem_stats", 00:07:49.470 "req_id": 1 00:07:49.470 } 00:07:49.471 Got JSON-RPC error response 00:07:49.471 response: 00:07:49.471 { 00:07:49.471 "code": -32601, 00:07:49.471 "message": "Method not found" 00:07:49.471 } 00:07:49.729 22:37:41 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:49.729 22:37:41 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:49.729 22:37:41 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:49.729 22:37:41 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:49.729 22:37:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3424673 00:07:49.729 22:37:41 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3424673 ']' 00:07:49.729 22:37:41 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3424673 00:07:49.729 22:37:41 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:49.729 22:37:41 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.729 22:37:41 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3424673 00:07:49.729 22:37:42 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.729 22:37:42 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:49.729 22:37:42 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3424673' 00:07:49.729 killing process with pid 3424673 00:07:49.729 22:37:42 app_cmdline -- common/autotest_common.sh@965 -- # kill 3424673 00:07:49.729 22:37:42 app_cmdline -- common/autotest_common.sh@970 -- # wait 3424673 00:07:49.988 00:07:49.988 real 0m1.482s 00:07:49.988 user 0m1.798s 00:07:49.988 sys 0m0.457s 00:07:49.988 22:37:42 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.988 22:37:42 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:49.988 ************************************ 00:07:49.988 END TEST app_cmdline 00:07:49.988 ************************************ 00:07:49.988 22:37:42 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:49.988 22:37:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.988 22:37:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.988 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:07:50.247 ************************************ 00:07:50.247 START TEST version 00:07:50.247 ************************************ 00:07:50.247 22:37:42 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:50.247 * Looking for test storage... 00:07:50.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:50.247 22:37:42 version -- app/version.sh@17 -- # get_header_version major 00:07:50.247 22:37:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:50.247 22:37:42 version -- app/version.sh@14 -- # cut -f2 00:07:50.247 22:37:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:50.247 22:37:42 version -- app/version.sh@17 -- # major=24 00:07:50.247 22:37:42 version -- app/version.sh@18 -- # get_header_version minor 00:07:50.247 22:37:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:50.247 22:37:42 version -- app/version.sh@14 -- # cut -f2 00:07:50.247 22:37:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:50.247 22:37:42 version -- app/version.sh@18 -- # minor=5 00:07:50.247 22:37:42 version -- app/version.sh@19 -- # get_header_version patch 00:07:50.247 22:37:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:50.247 22:37:42 version -- app/version.sh@14 -- # cut -f2 00:07:50.247 22:37:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:50.247 22:37:42 version -- app/version.sh@19 -- # patch=1 00:07:50.247 22:37:42 version -- app/version.sh@20 -- # get_header_version suffix 00:07:50.247 22:37:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:50.247 22:37:42 version -- app/version.sh@14 -- # cut -f2 00:07:50.247 22:37:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:50.247 22:37:42 version -- app/version.sh@20 -- # suffix=-pre 00:07:50.247 22:37:42 version -- app/version.sh@22 -- # version=24.5 00:07:50.247 22:37:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:50.247 22:37:42 version -- app/version.sh@25 -- # version=24.5.1 00:07:50.247 22:37:42 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:50.247 22:37:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:50.247 22:37:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
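The get_header_version calls filling the version.sh trace above are a plain grep/cut/tr pipeline over include/spdk/version.h. A minimal reconstruction (the pipeline is taken from the traced commands; the wrapper function shape is assumed):

    get_header_version() {   # $1 is MAJOR, MINOR, PATCH or SUFFIX
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)   # 24 in this run
    patch=$(get_header_version PATCH)   # 1, hence version 24.5.1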
00:07:50.247 22:37:42 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:50.247 22:37:42 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:50.247 00:07:50.247 real 0m0.108s 00:07:50.247 user 0m0.054s 00:07:50.247 sys 0m0.076s 00:07:50.247 22:37:42 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.247 22:37:42 version -- common/autotest_common.sh@10 -- # set +x 00:07:50.247 ************************************ 00:07:50.247 END TEST version 00:07:50.247 ************************************ 00:07:50.247 22:37:42 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:50.247 22:37:42 -- spdk/autotest.sh@198 -- # uname -s 00:07:50.247 22:37:42 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:50.247 22:37:42 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:50.247 22:37:42 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:50.247 22:37:42 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:50.247 22:37:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:50.247 22:37:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:50.247 22:37:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.247 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:07:50.247 22:37:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:50.247 22:37:42 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:50.247 22:37:42 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:50.247 22:37:42 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:50.247 22:37:42 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:50.247 22:37:42 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:50.247 22:37:42 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:50.247 22:37:42 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:50.247 22:37:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.247 22:37:42 -- common/autotest_common.sh@10 -- # set +x 00:07:50.247 ************************************ 00:07:50.247 START TEST nvmf_tcp 00:07:50.247 ************************************ 00:07:50.247 22:37:42 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:50.247 * Looking for test storage... 00:07:50.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.247 22:37:42 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.247 22:37:42 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.247 22:37:42 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.247 22:37:42 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.247 22:37:42 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.247 22:37:42 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.247 22:37:42 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:50.247 22:37:42 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.247 22:37:42 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:50.248 22:37:42 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:50.248 22:37:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:50.248 22:37:42 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:50.248 22:37:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:50.248 22:37:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.248 22:37:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.507 ************************************ 00:07:50.507 START TEST nvmf_example 00:07:50.507 ************************************ 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:50.507 * Looking for test storage... 
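The build_nvmf_app_args step just traced accumulates the target's command line in a bash array rather than a flat string, so quoted arguments survive word splitting. A condensed sketch of the pattern; the host-ID derivation and the binary path are assumptions, the flags are taken from the log:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: the host ID is the UUID tail of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    NVMF_APP=(./build/bin/nvmf_tgt)         # placeholder binary path for this sketch
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id plus 0xFFFF trace mask, as logged

    "${NVMF_APP[@]}" &                      # the test launches this later, inside the target netns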
00:07:50.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.507 22:37:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:52.409 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:52.409 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.409 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:52.410 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:52.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:07:52.410 00:07:52.410 --- 10.0.0.2 ping statistics --- 00:07:52.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.410 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:07:52.410 00:07:52.410 --- 10.0.0.1 ping statistics --- 00:07:52.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.410 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3426633 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3426633 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3426633 ']' 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
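To recap the nvmf_tcp_init sequence traced above: the NIC's second port stays in the root namespace as the initiator (10.0.0.1) while the first port moves into a fresh network namespace as the target (10.0.0.2), letting a single host exercise a real NVMe/TCP fabric over a physical link. The same commands, collected in one place:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                           # target-side port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                     # initiator -> target reachability check
    ip netns exec $NS ping -c 1 10.0.0.1   # target -> initiator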
00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.410 22:37:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:52.410 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.343 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.343 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:53.343 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:53.343 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.343 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.343 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.343 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.343 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.601 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.601 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:53.601 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.601 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:53.602 22:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:53.602 EAL: No free 2048 kB hugepages reported on node 1 
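The provisioning steps above go through the rpc_cmd helper, which wraps scripts/rpc.py against /var/tmp/spdk.sock. Written out as explicit rpc.py calls (flags copied from the trace; the script path is a placeholder for this sketch):

    RPC=./scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512          # 64 MiB bdev with 512 B blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Then drive I/O against it exactly as the test does: 64-deep queue,
    # 4 KiB I/O, 30% reads in a random mix, for 10 seconds.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'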
00:08:05.846 Initializing NVMe Controllers 00:08:05.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:05.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:05.846 Initialization complete. Launching workers. 00:08:05.846 ======================================================== 00:08:05.846 Latency(us) 00:08:05.846 Device Information : IOPS MiB/s Average min max 00:08:05.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15008.90 58.63 4263.92 819.29 16540.18 00:08:05.846 ======================================================== 00:08:05.846 Total : 15008.90 58.63 4263.92 819.29 16540.18 00:08:05.846 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.846 rmmod nvme_tcp 00:08:05.846 rmmod nvme_fabrics 00:08:05.846 rmmod nvme_keyring 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3426633 ']' 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3426633 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3426633 ']' 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3426633 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3426633 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3426633' 00:08:05.846 killing process with pid 3426633 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3426633 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3426633 00:08:05.846 nvmf threads initialize successfully 00:08:05.846 bdev subsystem init successfully 00:08:05.846 created a nvmf target service 00:08:05.846 create targets's poll groups done 00:08:05.846 all subsystems of target started 00:08:05.846 nvmf target is running 00:08:05.846 all subsystems of target stopped 00:08:05.846 destroy targets's poll groups done 00:08:05.846 destroyed the nvmf target service 00:08:05.846 bdev subsystem finish successfully 00:08:05.846 nvmf threads destroy successfully 00:08:05.846 22:37:56 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.846 22:37:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.106 22:37:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:06.106 22:37:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:06.106 22:37:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.106 22:37:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.106 00:08:06.106 real 0m15.815s 00:08:06.106 user 0m45.352s 00:08:06.106 sys 0m3.166s 00:08:06.106 22:37:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.106 22:37:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.106 ************************************ 00:08:06.106 END TEST nvmf_example 00:08:06.106 ************************************ 00:08:06.106 22:37:58 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:06.106 22:37:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:06.106 22:37:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.106 22:37:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.368 ************************************ 00:08:06.368 START TEST nvmf_filesystem 00:08:06.368 ************************************ 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:06.368 * Looking for test storage... 
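Before the next test begins, note that nvmftestfini, traced above, reverses the earlier setup: unload the kernel initiator modules, stop the target process, and dismantle the namespace. Roughly, with the netns-delete line as an assumption about what _remove_spdk_ns amounts to here:

    modprobe -r nvme-tcp                 # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the target (pid 3426633 in this run) and reap it
    ip netns delete cvl_0_0_ns_spdk      # assumption: the namespace removal behind _remove_spdk_ns
    ip -4 addr flush cvl_0_1             # return the initiator port to a clean state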
00:08:06.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:06.368 22:37:58 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:06.368 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:06.369 22:37:58 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:06.369 #define SPDK_CONFIG_H 00:08:06.369 #define SPDK_CONFIG_APPS 1 00:08:06.369 #define SPDK_CONFIG_ARCH native 00:08:06.369 #undef SPDK_CONFIG_ASAN 00:08:06.369 #undef SPDK_CONFIG_AVAHI 00:08:06.369 #undef SPDK_CONFIG_CET 00:08:06.369 #define SPDK_CONFIG_COVERAGE 1 00:08:06.369 #define SPDK_CONFIG_CROSS_PREFIX 00:08:06.369 #undef SPDK_CONFIG_CRYPTO 00:08:06.369 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:06.369 #undef SPDK_CONFIG_CUSTOMOCF 00:08:06.369 #undef SPDK_CONFIG_DAOS 00:08:06.369 #define SPDK_CONFIG_DAOS_DIR 00:08:06.369 #define SPDK_CONFIG_DEBUG 1 00:08:06.369 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:06.369 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:06.369 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:06.369 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:06.369 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:06.369 #undef SPDK_CONFIG_DPDK_UADK 00:08:06.369 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:06.369 #define SPDK_CONFIG_EXAMPLES 1 00:08:06.369 #undef SPDK_CONFIG_FC 00:08:06.369 #define SPDK_CONFIG_FC_PATH 00:08:06.369 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:06.369 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:06.369 #undef SPDK_CONFIG_FUSE 00:08:06.369 #undef SPDK_CONFIG_FUZZER 00:08:06.369 #define SPDK_CONFIG_FUZZER_LIB 00:08:06.369 #undef SPDK_CONFIG_GOLANG 00:08:06.369 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:06.369 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:06.369 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:06.369 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:06.369 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:06.369 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:06.369 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:06.369 #define SPDK_CONFIG_IDXD 1 00:08:06.369 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:06.369 #undef SPDK_CONFIG_IPSEC_MB 00:08:06.369 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:06.369 #define SPDK_CONFIG_ISAL 1 00:08:06.369 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:06.369 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:06.369 #define SPDK_CONFIG_LIBDIR 00:08:06.369 #undef SPDK_CONFIG_LTO 00:08:06.369 #define SPDK_CONFIG_MAX_LCORES 
00:08:06.369 #define SPDK_CONFIG_NVME_CUSE 1 00:08:06.369 #undef SPDK_CONFIG_OCF 00:08:06.369 #define SPDK_CONFIG_OCF_PATH 00:08:06.369 #define SPDK_CONFIG_OPENSSL_PATH 00:08:06.369 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:06.369 #define SPDK_CONFIG_PGO_DIR 00:08:06.369 #undef SPDK_CONFIG_PGO_USE 00:08:06.369 #define SPDK_CONFIG_PREFIX /usr/local 00:08:06.369 #undef SPDK_CONFIG_RAID5F 00:08:06.369 #undef SPDK_CONFIG_RBD 00:08:06.369 #define SPDK_CONFIG_RDMA 1 00:08:06.369 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:06.369 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:06.369 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:06.369 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:06.369 #define SPDK_CONFIG_SHARED 1 00:08:06.369 #undef SPDK_CONFIG_SMA 00:08:06.369 #define SPDK_CONFIG_TESTS 1 00:08:06.369 #undef SPDK_CONFIG_TSAN 00:08:06.369 #define SPDK_CONFIG_UBLK 1 00:08:06.369 #define SPDK_CONFIG_UBSAN 1 00:08:06.369 #undef SPDK_CONFIG_UNIT_TESTS 00:08:06.369 #undef SPDK_CONFIG_URING 00:08:06.369 #define SPDK_CONFIG_URING_PATH 00:08:06.369 #undef SPDK_CONFIG_URING_ZNS 00:08:06.369 #undef SPDK_CONFIG_USDT 00:08:06.369 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:06.369 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:06.369 #define SPDK_CONFIG_VFIO_USER 1 00:08:06.369 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:06.369 #define SPDK_CONFIG_VHOST 1 00:08:06.369 #define SPDK_CONFIG_VIRTIO 1 00:08:06.369 #undef SPDK_CONFIG_VTUNE 00:08:06.369 #define SPDK_CONFIG_VTUNE_DIR 00:08:06.369 #define SPDK_CONFIG_WERROR 1 00:08:06.369 #define SPDK_CONFIG_WPDK_DIR 00:08:06.369 #undef SPDK_CONFIG_XNVME 00:08:06.369 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.369 22:37:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:06.370 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:08:06.371 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3428399 ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3428399 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.fFZ6W4 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fFZ6W4/tests/target /tmp/spdk.fFZ6W4 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=919711744 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4364718080 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=53508956160 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994713088 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8485756928 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941720576 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:06.372 22:37:58 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996250624 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1105920 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:06.372 * Looking for test storage... 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=53508956160 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=10700349440 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:06.372 
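For reference, the set_test_storage trace above reduces to this logic: try the test's own directory first, then a mktemp fallback under /tmp, and keep the first candidate whose backing filesystem has at least the requested 2 GiB free (the real helper also special-cases tmpfs/ramfs mounts and rejects a candidate if the request would push the filesystem past 95% full, as the new_size check above shows). A condensed bash sketch, not the verbatim autotest_common.sh implementation; testdir is assumed to be set by the test being run:

    # Condensed sketch of set_test_storage (simplified; see trace for the real flow).
    requested_size=2147483648                      # 2 GiB, as requested in the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)    # e.g. /tmp/spdk.fFZ6W4 above
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mkdir -p "$target_dir"
        avail=$(df --output=avail -B1 "$target_dir" | tail -n1)   # free bytes on the backing fs
        if (( avail >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir   # /spdk/test/nvmf/target wins in this run
            break
        fi
    done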
22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.372 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.373 22:37:58 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:06.373 22:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:08.277 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:08.536 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:08.536 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.536 22:38:00 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:08.536 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:08.536 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.536 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:08:08.537 00:08:08.537 --- 10.0.0.2 ping statistics --- 00:08:08.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.537 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:08:08.537 00:08:08.537 --- 10.0.0.1 ping statistics --- 00:08:08.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.537 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.537 ************************************ 00:08:08.537 START TEST nvmf_filesystem_no_in_capsule 00:08:08.537 ************************************ 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:08.537 22:38:00 
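The interface plumbing that nvmftestinit just performed is easier to read as a plain sequence. The two ports of the E810 NIC (cvl_0_0 and cvl_0_1, presumably cabled back-to-back on this phy rig) are split across network namespaces so a single host can act as both NVMe/TCP target and initiator; every command below appears verbatim in the trace:

    ip netns add cvl_0_0_ns_spdk                       # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1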
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3430021 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3430021 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3430021 ']' 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:08.537 22:38:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.537 [2024-07-26 22:38:01.033741] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:08.537 [2024-07-26 22:38:01.033817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.795 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.796 [2024-07-26 22:38:01.107601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.796 [2024-07-26 22:38:01.204276] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.796 [2024-07-26 22:38:01.204340] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.796 [2024-07-26 22:38:01.204365] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.796 [2024-07-26 22:38:01.204379] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.796 [2024-07-26 22:38:01.204392] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
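nvmfappstart boils down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A simplified stand-in for the waitforlisten helper follows; the real helper is more elaborate (bounded retries, /proc checks), and using rpc_get_methods as a cheap readiness probe is this sketch's assumption, not necessarily what waitforlisten does internally:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app is up; bail out if it died mid-start.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done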
00:08:08.796 [2024-07-26 22:38:01.204459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.796 [2024-07-26 22:38:01.204515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.796 [2024-07-26 22:38:01.204567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.796 [2024-07-26 22:38:01.204569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.054 [2024-07-26 22:38:01.360975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.054 Malloc1 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.054 [2024-07-26 22:38:01.545253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.054 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.055 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:09.055 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:09.055 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:09.055 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:09.055 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:09.055 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:09.055 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.055 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.312 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.312 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:09.312 { 00:08:09.312 "name": "Malloc1", 00:08:09.312 "aliases": [ 00:08:09.312 "59473e4c-127b-4199-a960-e2026315ba02" 00:08:09.312 ], 00:08:09.312 "product_name": "Malloc disk", 00:08:09.312 "block_size": 512, 00:08:09.312 "num_blocks": 1048576, 00:08:09.312 "uuid": "59473e4c-127b-4199-a960-e2026315ba02", 00:08:09.312 "assigned_rate_limits": { 00:08:09.312 "rw_ios_per_sec": 0, 00:08:09.312 "rw_mbytes_per_sec": 0, 00:08:09.312 "r_mbytes_per_sec": 0, 00:08:09.312 "w_mbytes_per_sec": 0 00:08:09.312 }, 00:08:09.312 "claimed": true, 00:08:09.312 "claim_type": "exclusive_write", 00:08:09.312 "zoned": false, 00:08:09.312 "supported_io_types": { 00:08:09.312 "read": true, 00:08:09.312 "write": true, 00:08:09.312 "unmap": true, 00:08:09.312 "write_zeroes": true, 00:08:09.312 "flush": true, 00:08:09.312 "reset": true, 00:08:09.312 "compare": false, 00:08:09.312 "compare_and_write": false, 00:08:09.312 "abort": true, 00:08:09.312 "nvme_admin": false, 00:08:09.312 "nvme_io": false 00:08:09.312 }, 00:08:09.312 "memory_domains": [ 00:08:09.312 { 00:08:09.312 "dma_device_id": "system", 00:08:09.312 "dma_device_type": 1 00:08:09.312 }, 00:08:09.312 { 00:08:09.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.312 "dma_device_type": 2 00:08:09.312 } 00:08:09.312 ], 00:08:09.312 "driver_specific": {} 00:08:09.312 } 00:08:09.312 ]' 00:08:09.312 
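Annotation: unwrapped from the rpc_cmd helper, the provisioning steps above are plain scripts/rpc.py calls. A condensed sketch, with every argument copied verbatim from the trace (the rpc shorthand is illustrative):

rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: in-capsule data disabled
rpc bdev_malloc_create 512 512 -b Malloc1           # 512 MiB ramdisk bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_get_bdevs -b Malloc1                       # emits the JSON dump shown above

The jq extraction that follows turns that JSON into a size: 512 bytes/block * 1048576 blocks = 536870912 bytes, the 512 MiB that get_bdev_size echoes and the script keeps as malloc_size. The host side then attaches with nvme-cli, again with arguments straight from the log:

nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420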
22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:09.312 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:09.312 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:09.312 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:09.312 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:09.312 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:09.312 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:09.312 22:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:09.875 22:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:09.875 22:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:09.875 22:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:09.875 22:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:09.875 22:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:12.399 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:12.400 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:12.400 22:38:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:12.400 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:12.400 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:12.400 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:12.400 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:12.400 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:12.400 22:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.330 ************************************ 00:08:13.330 START TEST filesystem_ext4 00:08:13.330 ************************************ 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:13.330 22:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:13.330 mke2fs 1.46.5 (30-Dec-2021) 00:08:13.588 Discarding device blocks: 0/522240 done 00:08:13.588 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:13.588 
Filesystem UUID: 9c55168d-5f80-4d96-9c00-c4f8b035d9a6 00:08:13.588 Superblock backups stored on blocks: 00:08:13.588 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:13.588 00:08:13.588 Allocating group tables: 0/64 done 00:08:13.588 Writing inode tables: 0/64 done 00:08:14.518 Creating journal (8192 blocks): done 00:08:14.518 Writing superblocks and filesystem accounting information: 0/64 done 00:08:14.518 00:08:14.518 22:38:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:14.518 22:38:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3430021 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.776 00:08:14.776 real 0m1.432s 00:08:14.776 user 0m0.020s 00:08:14.776 sys 0m0.055s 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:14.776 ************************************ 00:08:14.776 END TEST filesystem_ext4 00:08:14.776 ************************************ 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.776 ************************************ 00:08:14.776 START TEST filesystem_btrfs 00:08:14.776 ************************************ 00:08:14.776 22:38:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:14.776 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:15.034 btrfs-progs v6.6.2 00:08:15.034 See https://btrfs.readthedocs.io for more information. 00:08:15.034 00:08:15.034 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:15.034 NOTE: several default settings have changed in version 5.15, please make sure 00:08:15.034 this does not affect your deployments: 00:08:15.034 - DUP for metadata (-m dup) 00:08:15.034 - enabled no-holes (-O no-holes) 00:08:15.034 - enabled free-space-tree (-R free-space-tree) 00:08:15.034 00:08:15.034 Label: (null) 00:08:15.034 UUID: 1a6188d1-cb47-47bc-9366-c31729f83161 00:08:15.034 Node size: 16384 00:08:15.034 Sector size: 4096 00:08:15.034 Filesystem size: 510.00MiB 00:08:15.034 Block group profiles: 00:08:15.034 Data: single 8.00MiB 00:08:15.034 Metadata: DUP 32.00MiB 00:08:15.034 System: DUP 8.00MiB 00:08:15.034 SSD detected: yes 00:08:15.034 Zoned device: no 00:08:15.034 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:15.034 Runtime features: free-space-tree 00:08:15.034 Checksum: crc32c 00:08:15.034 Number of devices: 1 00:08:15.034 Devices: 00:08:15.034 ID SIZE PATH 00:08:15.034 1 510.00MiB /dev/nvme0n1p1 00:08:15.034 00:08:15.034 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:15.035 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.292 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.292 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:15.292 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.292 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:15.292 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:15.292 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.292 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3430021 00:08:15.292 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.292 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.551 00:08:15.551 real 0m0.534s 00:08:15.551 user 0m0.016s 00:08:15.551 sys 0m0.112s 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:15.551 ************************************ 00:08:15.551 END TEST filesystem_btrfs 00:08:15.551 ************************************ 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:15.551 22:38:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.551 ************************************ 00:08:15.551 START TEST filesystem_xfs 00:08:15.551 ************************************ 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:15.551 22:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:15.551 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:15.551 = sectsz=512 attr=2, projid32bit=1 00:08:15.551 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:15.551 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:15.551 data = bsize=4096 blocks=130560, imaxpct=25 00:08:15.551 = sunit=0 swidth=0 blks 00:08:15.551 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:15.551 log =internal log bsize=4096 blocks=16384, version=2 00:08:15.551 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:15.551 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:16.483 Discarding blocks...Done. 
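Annotation: all three mkfs runs above funnel through the same make_filesystem helper. Reconstructed from the xtrace, it looks roughly like the sketch below; the i counter declared in the trace suggests a retry loop around mkfs that never fires in this run, so the sketch omits it:

make_filesystem() {
    local fstype=$1 dev_name=$2
    local i=0 force
    # ext4 spells its force flag -F; btrfs and xfs take -f
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    mkfs."$fstype" "$force" "$dev_name"
}

Each filesystem then gets the same smoke test (filesystem.sh lines 23-30 and 40-43 in the trace): mount, create and remove a file, unmount, and verify that both the target process and the block devices survived:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                        # nvmf_tgt must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still exported
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible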
00:08:16.483 22:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:16.483 22:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.006 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.006 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:19.006 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.006 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:19.006 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:19.006 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.263 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3430021 00:08:19.263 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.263 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.263 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.263 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.263 00:08:19.263 real 0m3.684s 00:08:19.263 user 0m0.012s 00:08:19.263 sys 0m0.067s 00:08:19.263 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.263 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:19.263 ************************************ 00:08:19.263 END TEST filesystem_xfs 00:08:19.263 ************************************ 00:08:19.263 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:19.521 
22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3430021 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3430021 ']' 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3430021 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3430021 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3430021' 00:08:19.521 killing process with pid 3430021 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3430021 00:08:19.521 22:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3430021 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:20.087 00:08:20.087 real 0m11.400s 00:08:20.087 user 0m43.639s 00:08:20.087 sys 0m1.778s 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.087 ************************************ 00:08:20.087 END TEST nvmf_filesystem_no_in_capsule 00:08:20.087 ************************************ 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.087 
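Annotation: the teardown above mirrors the setup: disconnect the initiator, wait for the serial to vanish from lsblk, delete the subsystem over RPC, then kill the target and reap it. Condensed, with the pid and NQN from the log and the rpc shorthand from the earlier sketch:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# waitforserial_disconnect: loop until no block device carries our serial
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 3430021 && wait 3430021    # killprocess: SIGTERM, then reap the child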
************************************ 00:08:20.087 START TEST nvmf_filesystem_in_capsule 00:08:20.087 ************************************ 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:20.087 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3431576 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3431576 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3431576 ']' 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:20.088 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.088 [2024-07-26 22:38:12.476416] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:20.088 [2024-07-26 22:38:12.476516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.088 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.088 [2024-07-26 22:38:12.540920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.346 [2024-07-26 22:38:12.630463] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.346 [2024-07-26 22:38:12.630525] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.346 [2024-07-26 22:38:12.630554] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.346 [2024-07-26 22:38:12.630565] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.346 [2024-07-26 22:38:12.630575] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
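Annotation: the second pass repeats the entire scenario with in_capsule=4096, and the functional difference is a single argument to the transport creation that follows: -c sets the in-capsule data size, so writes of up to 4096 bytes ride inside the NVMe/TCP command capsule instead of being fetched by the target in a separate data transfer. The one changed call, using the rpc shorthand from above:

rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096   # first pass used -c 0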
00:08:20.346 [2024-07-26 22:38:12.630657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.346 [2024-07-26 22:38:12.630724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.346 [2024-07-26 22:38:12.630790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.346 [2024-07-26 22:38:12.630792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.346 [2024-07-26 22:38:12.774827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.346 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.605 Malloc1 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.605 22:38:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.605 [2024-07-26 22:38:12.966269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:20.605 { 00:08:20.605 "name": "Malloc1", 00:08:20.605 "aliases": [ 00:08:20.605 "04755dc3-d3fd-49d7-a1a7-0b0fda7a81ba" 00:08:20.605 ], 00:08:20.605 "product_name": "Malloc disk", 00:08:20.605 "block_size": 512, 00:08:20.605 "num_blocks": 1048576, 00:08:20.605 "uuid": "04755dc3-d3fd-49d7-a1a7-0b0fda7a81ba", 00:08:20.605 "assigned_rate_limits": { 00:08:20.605 "rw_ios_per_sec": 0, 00:08:20.605 "rw_mbytes_per_sec": 0, 00:08:20.605 "r_mbytes_per_sec": 0, 00:08:20.605 "w_mbytes_per_sec": 0 00:08:20.605 }, 00:08:20.605 "claimed": true, 00:08:20.605 "claim_type": "exclusive_write", 00:08:20.605 "zoned": false, 00:08:20.605 "supported_io_types": { 00:08:20.605 "read": true, 00:08:20.605 "write": true, 00:08:20.605 "unmap": true, 00:08:20.605 "write_zeroes": true, 00:08:20.605 "flush": true, 00:08:20.605 "reset": true, 00:08:20.605 "compare": false, 00:08:20.605 "compare_and_write": false, 00:08:20.605 "abort": true, 00:08:20.605 "nvme_admin": false, 00:08:20.605 "nvme_io": false 00:08:20.605 }, 00:08:20.605 "memory_domains": [ 00:08:20.605 { 00:08:20.605 "dma_device_id": "system", 00:08:20.605 "dma_device_type": 1 00:08:20.605 }, 00:08:20.605 { 00:08:20.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.605 "dma_device_type": 2 00:08:20.605 } 00:08:20.605 ], 00:08:20.605 "driver_specific": {} 00:08:20.605 } 00:08:20.605 ]' 00:08:20.605 22:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:08:20.605 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:20.605 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:20.605 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:20.605 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:20.605 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:20.605 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:20.605 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:21.544 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.544 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:21.544 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.544 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:21.544 22:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:23.483 22:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:23.741 22:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:24.673 22:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:25.606 22:38:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:25.606 22:38:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:25.606 22:38:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:25.606 22:38:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.606 22:38:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.606 ************************************ 00:08:25.606 START TEST filesystem_in_capsule_ext4 00:08:25.606 ************************************ 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:25.606 22:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:25.606 mke2fs 1.46.5 (30-Dec-2021) 00:08:25.864 Discarding device blocks: 0/522240 done 00:08:25.864 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:25.864 Filesystem UUID: b0dea593-3f61-4ca8-baa2-5e15816b83d1 00:08:25.864 Superblock backups stored on blocks: 00:08:25.864 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:25.864 00:08:25.864 Allocating group tables: 0/64 done 00:08:25.864 Writing inode tables: 0/64 done 00:08:25.864 Creating journal (8192 blocks): done 00:08:26.944 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:08:26.944 00:08:26.944 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:26.944 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3431576 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.510 00:08:27.510 real 0m1.882s 00:08:27.510 user 0m0.021s 00:08:27.510 sys 0m0.057s 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:27.510 ************************************ 00:08:27.510 END TEST filesystem_in_capsule_ext4 00:08:27.510 ************************************ 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.510 ************************************ 00:08:27.510 START TEST filesystem_in_capsule_btrfs 00:08:27.510 ************************************ 00:08:27.510 22:38:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:27.510 22:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:27.769 btrfs-progs v6.6.2 00:08:27.769 See https://btrfs.readthedocs.io for more information. 00:08:27.769 00:08:27.769 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:27.769 NOTE: several default settings have changed in version 5.15, please make sure 00:08:27.769 this does not affect your deployments: 00:08:27.769 - DUP for metadata (-m dup) 00:08:27.769 - enabled no-holes (-O no-holes) 00:08:27.769 - enabled free-space-tree (-R free-space-tree) 00:08:27.769 00:08:27.769 Label: (null) 00:08:27.769 UUID: 68edf661-8675-4a52-8a95-9e9f41df3e1c 00:08:27.769 Node size: 16384 00:08:27.769 Sector size: 4096 00:08:27.769 Filesystem size: 510.00MiB 00:08:27.769 Block group profiles: 00:08:27.769 Data: single 8.00MiB 00:08:27.769 Metadata: DUP 32.00MiB 00:08:27.769 System: DUP 8.00MiB 00:08:27.769 SSD detected: yes 00:08:27.769 Zoned device: no 00:08:27.769 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:27.769 Runtime features: free-space-tree 00:08:27.769 Checksum: crc32c 00:08:27.769 Number of devices: 1 00:08:27.769 Devices: 00:08:27.769 ID SIZE PATH 00:08:27.769 1 510.00MiB /dev/nvme0n1p1 00:08:27.769 00:08:27.769 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:27.769 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3431576 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.027 00:08:28.027 real 0m0.435s 00:08:28.027 user 0m0.016s 00:08:28.027 sys 0m0.112s 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:28.027 ************************************ 00:08:28.027 END TEST filesystem_in_capsule_btrfs 00:08:28.027 ************************************ 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.027 ************************************ 00:08:28.027 START TEST filesystem_in_capsule_xfs 00:08:28.027 ************************************ 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:28.027 22:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:28.027 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:28.027 = sectsz=512 attr=2, projid32bit=1 00:08:28.027 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:28.027 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:28.027 data = bsize=4096 blocks=130560, imaxpct=25 00:08:28.027 = sunit=0 swidth=0 blks 00:08:28.027 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:28.027 log =internal log bsize=4096 blocks=16384, version=2 00:08:28.027 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:28.027 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:29.401 Discarding blocks...Done. 
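Annotation: a quick sanity check on the mkfs.xfs geometry printed above: 130560 data blocks of 4096 bytes is exactly the 510.00 MiB capacity that mkfs.btrfs also reported for /dev/nvme0n1p1, and the missing 2 MiB relative to the 512 MiB namespace is plausibly the space parted set aside at the partition edges for GPT metadata and alignment:

echo $((130560 * 4096))        # 534773760 bytes
echo $((534773760 / 1048576))  # 510 MiB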
00:08:29.401 22:38:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:29.401 22:38:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3431576 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.925 00:08:31.925 real 0m3.714s 00:08:31.925 user 0m0.026s 00:08:31.925 sys 0m0.054s 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:31.925 ************************************ 00:08:31.925 END TEST filesystem_in_capsule_xfs 00:08:31.925 ************************************ 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:31.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.925 22:38:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:31.925 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3431576 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3431576 ']' 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3431576 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3431576 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3431576' 00:08:31.926 killing process with pid 3431576 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3431576 00:08:31.926 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3431576 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:32.491 00:08:32.491 real 0m12.380s 00:08:32.491 user 0m47.610s 00:08:32.491 sys 0m1.803s 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.491 ************************************ 00:08:32.491 END TEST nvmf_filesystem_in_capsule 00:08:32.491 ************************************ 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.491 rmmod nvme_tcp 00:08:32.491 rmmod nvme_fabrics 00:08:32.491 rmmod nvme_keyring 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.491 22:38:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.029 22:38:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:35.029 00:08:35.029 real 0m28.306s 00:08:35.029 user 1m32.176s 00:08:35.029 sys 0m5.169s 00:08:35.029 22:38:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.029 22:38:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.029 ************************************ 00:08:35.029 END TEST nvmf_filesystem 00:08:35.029 ************************************ 00:08:35.029 22:38:26 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:35.029 22:38:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:35.029 22:38:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.029 22:38:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.029 ************************************ 00:08:35.029 START TEST nvmf_target_discovery 00:08:35.029 ************************************ 00:08:35.029 22:38:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:35.029 * Looking for test storage... 
00:08:35.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.029 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:35.030 22:38:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.933 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.933 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:36.933 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.934 22:38:28 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:36.934 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:36.934 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:36.934 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:36.934 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.934 22:38:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.934 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.934 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:36.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:36.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms
00:08:36.935
00:08:36.935 --- 10.0.0.2 ping statistics ---
00:08:36.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:36.935 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:36.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:36.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:08:36.935
00:08:36.935 --- 10.0.0.1 ping statistics ---
00:08:36.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:36.935 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3435072
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3435072
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3435072 ']'
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100
00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX
domain socket /var/tmp/spdk.sock...' 00:08:36.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:36.935 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.935 [2024-07-26 22:38:29.196330] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:36.935 [2024-07-26 22:38:29.196445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.935 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.935 [2024-07-26 22:38:29.265800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.935 [2024-07-26 22:38:29.360644] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.935 [2024-07-26 22:38:29.360693] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.935 [2024-07-26 22:38:29.360709] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.935 [2024-07-26 22:38:29.360723] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.935 [2024-07-26 22:38:29.360735] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.935 [2024-07-26 22:38:29.360820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.935 [2024-07-26 22:38:29.360872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.935 [2024-07-26 22:38:29.360906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.935 [2024-07-26 22:38:29.360908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.194 [2024-07-26 22:38:29.518956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:37.194 22:38:29 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.194 Null1 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.194 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 [2024-07-26 22:38:29.559326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 Null2 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:37.195 22:38:29 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 Null3 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 Null4 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.195 22:38:29 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.195 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:08:37.453
00:08:37.453 Discovery Log Number of Records 6, Generation counter 6
00:08:37.453 =====Discovery Log Entry 0======
00:08:37.453 trtype: tcp
00:08:37.453 adrfam: ipv4
00:08:37.453 subtype: current discovery subsystem
00:08:37.453 treq: not required
00:08:37.453 portid: 0
00:08:37.453 trsvcid: 4420
00:08:37.453 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:37.453 traddr: 10.0.0.2
00:08:37.453 eflags: explicit discovery connections, duplicate discovery information
00:08:37.453 sectype: none
00:08:37.453 =====Discovery Log Entry 1======
00:08:37.453 trtype: tcp
00:08:37.453 adrfam: ipv4
00:08:37.453 subtype: nvme subsystem
00:08:37.453 treq: not required
00:08:37.453 portid: 0
00:08:37.453 trsvcid: 4420
00:08:37.453 subnqn: nqn.2016-06.io.spdk:cnode1
00:08:37.453 traddr: 10.0.0.2
00:08:37.453 eflags: none
00:08:37.453 sectype: none
00:08:37.453 =====Discovery Log Entry 2======
00:08:37.453 trtype: tcp
00:08:37.453 adrfam: ipv4
00:08:37.453 subtype: nvme subsystem
00:08:37.453 treq: not required
00:08:37.453 portid: 0
00:08:37.454 trsvcid: 4420
00:08:37.454 subnqn: nqn.2016-06.io.spdk:cnode2
00:08:37.454 traddr: 10.0.0.2
00:08:37.454 eflags: none
00:08:37.454 sectype: none
00:08:37.454 =====Discovery Log Entry 3======
00:08:37.454 trtype: tcp
00:08:37.454 adrfam: ipv4
00:08:37.454 subtype: nvme subsystem
00:08:37.454 treq: not required
00:08:37.454 portid: 0
00:08:37.454 trsvcid: 4420
00:08:37.454 subnqn: nqn.2016-06.io.spdk:cnode3
00:08:37.454 traddr: 10.0.0.2
00:08:37.454 eflags: none
00:08:37.454 sectype: none
00:08:37.454 =====Discovery Log Entry 4======
00:08:37.454 trtype: tcp
00:08:37.454 adrfam: ipv4
00:08:37.454 subtype: nvme subsystem
00:08:37.454 treq: not required
00:08:37.454 portid: 0
00:08:37.454 trsvcid: 4420
00:08:37.454 subnqn: nqn.2016-06.io.spdk:cnode4
00:08:37.454 traddr: 10.0.0.2
00:08:37.454 eflags: none
00:08:37.454 sectype: none
00:08:37.454 =====Discovery Log Entry 5======
00:08:37.454 trtype: tcp
00:08:37.454 adrfam: ipv4
00:08:37.454 subtype: discovery subsystem referral
00:08:37.454 treq: not required
00:08:37.454 portid: 0
00:08:37.454 trsvcid: 4430
00:08:37.454 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:37.454 traddr: 10.0.0.2
00:08:37.454 eflags: none
00:08:37.454 sectype: none
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:08:37.454 Perform nvmf subsystem discovery via RPC
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:37.454 [
00:08:37.454 {
00:08:37.454 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:08:37.454 "subtype": "Discovery",
00:08:37.454 "listen_addresses": [
00:08:37.454 {
00:08:37.454 "trtype": "TCP",
00:08:37.454 "adrfam": "IPv4",
00:08:37.454 "traddr": "10.0.0.2",
00:08:37.454 "trsvcid": "4420"
00:08:37.454 }
00:08:37.454 ],
00:08:37.454 "allow_any_host": true,
00:08:37.454 "hosts": []
00:08:37.454 },
00:08:37.454 {
00:08:37.454 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:08:37.454 "subtype": "NVMe",
00:08:37.454 "listen_addresses": [
00:08:37.454 {
00:08:37.454 "trtype": "TCP",
00:08:37.454 "adrfam": "IPv4",
00:08:37.454 "traddr": "10.0.0.2",
00:08:37.454 "trsvcid": "4420"
00:08:37.454 }
00:08:37.454 ],
00:08:37.454 "allow_any_host": true,
00:08:37.454 "hosts": [],
00:08:37.454 "serial_number": "SPDK00000000000001",
00:08:37.454 "model_number": "SPDK bdev Controller",
00:08:37.454 "max_namespaces": 32,
00:08:37.454 "min_cntlid": 1,
00:08:37.454 "max_cntlid": 65519,
00:08:37.454 "namespaces": [
00:08:37.454 {
00:08:37.454 "nsid": 1,
00:08:37.454 "bdev_name": "Null1",
00:08:37.454 "name": "Null1",
00:08:37.454 "nguid": "55113BCA6C0B478F9E4DC2835BEFE152",
00:08:37.454 "uuid": "55113bca-6c0b-478f-9e4d-c2835befe152"
00:08:37.454 }
00:08:37.454 ]
00:08:37.454 },
00:08:37.454 {
00:08:37.454 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:37.454 "subtype": "NVMe",
00:08:37.454 "listen_addresses": [
00:08:37.454 {
00:08:37.454 "trtype": "TCP",
00:08:37.454 "adrfam": "IPv4",
00:08:37.454 "traddr": "10.0.0.2",
00:08:37.454 "trsvcid": "4420"
00:08:37.454 }
00:08:37.454 ],
00:08:37.454 "allow_any_host": true,
00:08:37.454 "hosts": [],
00:08:37.454 "serial_number": "SPDK00000000000002",
00:08:37.454 "model_number": "SPDK bdev Controller",
00:08:37.454 "max_namespaces": 32,
00:08:37.454 "min_cntlid": 1,
00:08:37.454 "max_cntlid": 65519,
00:08:37.454 "namespaces": [
00:08:37.454 {
00:08:37.454 "nsid": 1,
00:08:37.454 "bdev_name": "Null2",
00:08:37.454 "name": "Null2",
00:08:37.454 "nguid": "9D9361A2702D4893A0BCF5B2C9F39AEA",
00:08:37.454 "uuid": "9d9361a2-702d-4893-a0bc-f5b2c9f39aea"
00:08:37.454 }
00:08:37.454 ]
00:08:37.454 },
00:08:37.454 {
00:08:37.454 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:08:37.454 "subtype": "NVMe",
00:08:37.454 "listen_addresses": [
00:08:37.454 {
00:08:37.454 "trtype": "TCP",
00:08:37.454 "adrfam": "IPv4",
00:08:37.454 "traddr": "10.0.0.2",
00:08:37.454 "trsvcid": "4420"
00:08:37.454 }
00:08:37.454 ],
00:08:37.454 "allow_any_host": true,
00:08:37.454 "hosts": [],
00:08:37.454 "serial_number": "SPDK00000000000003",
00:08:37.454 "model_number": "SPDK bdev Controller",
00:08:37.454 "max_namespaces": 32,
00:08:37.454 "min_cntlid": 1,
00:08:37.454 "max_cntlid": 65519,
00:08:37.454 "namespaces": [
00:08:37.454 {
00:08:37.454 "nsid": 1,
00:08:37.454 "bdev_name": "Null3",
00:08:37.454 "name": "Null3",
00:08:37.454 "nguid": "9A4B258265454FC6A125245645E324FD",
00:08:37.454 "uuid": "9a4b2582-6545-4fc6-a125-245645e324fd"
00:08:37.454 }
00:08:37.454 ]
00:08:37.454 },
00:08:37.454 {
00:08:37.454 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:08:37.454 "subtype": "NVMe",
00:08:37.454 "listen_addresses": [
00:08:37.454 {
00:08:37.454 "trtype": "TCP",
00:08:37.454 "adrfam": "IPv4",
00:08:37.454 "traddr": "10.0.0.2",
00:08:37.454 "trsvcid": "4420"
00:08:37.454 }
00:08:37.454 ],
00:08:37.454 "allow_any_host": true,
00:08:37.454 "hosts": [],
00:08:37.454 "serial_number": "SPDK00000000000004",
00:08:37.454 "model_number": "SPDK bdev Controller",
00:08:37.454 "max_namespaces": 32,
00:08:37.454 "min_cntlid": 1,
00:08:37.454 "max_cntlid": 65519,
00:08:37.454 "namespaces": [
00:08:37.454 {
00:08:37.454 "nsid": 1,
00:08:37.454 "bdev_name": "Null4",
00:08:37.454 "name": "Null4",
00:08:37.454 "nguid": "D88D0D89524B4648993F15FCB27A71E2",
00:08:37.454 "uuid": "d88d0d89-524b-4648-993f-15fcb27a71e2"
00:08:37.454 }
00:08:37.454 ]
00:08:37.454 }
00:08:37.454 ]
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.454 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.713 22:38:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.713 rmmod nvme_tcp 00:08:37.713 rmmod nvme_fabrics 00:08:37.713 rmmod nvme_keyring 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3435072 ']' 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3435072 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3435072 ']' 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3435072 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:37.713 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:37.714 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3435072 00:08:37.714 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:37.714 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:37.714 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3435072' 00:08:37.714 killing process with pid 3435072 00:08:37.714 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3435072 00:08:37.714 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3435072 00:08:37.972 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:37.973 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:37.973 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:37.973 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.973 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:37.973 22:38:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.973 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.973 22:38:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.878 22:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:39.878 00:08:39.878 real 0m5.388s 00:08:39.878 user 0m4.583s 00:08:39.878 sys 0m1.757s 00:08:39.878 22:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:39.878 22:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:39.878 ************************************ 00:08:39.878 END TEST nvmf_target_discovery 00:08:39.878 ************************************ 00:08:40.208 22:38:32 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:40.208 22:38:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:40.208 22:38:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.208 22:38:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.208 ************************************ 00:08:40.208 START TEST nvmf_referrals 00:08:40.208 ************************************ 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:40.208 * Looking for test storage... 00:08:40.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.208 22:38:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
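For orientation, the referral manipulation exercised below reduces to three SPDK JSON-RPC calls against the discovery service. A minimal standalone sketch, assuming a running nvmf_tgt and SPDK's in-tree scripts/rpc.py (the rpc_cmd seen in this log is a thin wrapper around it), using the same 127.0.0.x addresses and referral port 4430 as the test:

  # Register three discovery referrals on the referral port used by this test
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  # List the registered referral addresses (the test sorts and compares these)
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # Remove a referral again
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430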
00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.209 22:38:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.126 22:38:34 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.126 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:42.127 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:42.127 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.127 22:38:34 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:42.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:42.127 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.127 22:38:34 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:08:42.127 00:08:42.127 --- 10.0.0.2 ping statistics --- 00:08:42.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.127 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:08:42.127 00:08:42.127 --- 10.0.0.1 ping statistics --- 00:08:42.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.127 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3437162 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3437162 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3437162 ']' 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:42.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:42.127 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.386 [2024-07-26 22:38:34.663017] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:42.386 [2024-07-26 22:38:34.663096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.386 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.386 [2024-07-26 22:38:34.736881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.386 [2024-07-26 22:38:34.837621] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.386 [2024-07-26 22:38:34.837688] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.386 [2024-07-26 22:38:34.837705] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.386 [2024-07-26 22:38:34.837718] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.386 [2024-07-26 22:38:34.837730] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.386 [2024-07-26 22:38:34.837814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.386 [2024-07-26 22:38:34.837880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.386 [2024-07-26 22:38:34.837904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.386 [2024-07-26 22:38:34.837907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.644 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:42.644 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:42.644 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.644 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.644 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.644 22:38:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.644 22:38:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.644 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.644 22:38:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.644 [2024-07-26 22:38:34.997070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.644 [2024-07-26 22:38:35.009362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
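The host-side half of each check that follows is a plain nvme-cli discovery against the 8009 discovery listener just created. A condensed sketch of the invocation the test repeats (the --hostnqn/--hostid values are the ones generated earlier in this log; the jq filter keeps only referral entries, i.e. everything except the current discovery subsystem itself):

  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
      -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort
  # With the three referrals registered, this prints: 127.0.0.2 127.0.0.3 127.0.0.4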
00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.644 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:42.645 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:42.903 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:43.160 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:43.160 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:43.160 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:43.160 22:38:35 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.160 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.160 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.161 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.418 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:43.419 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:43.419 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:43.419 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.419 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.419 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.419 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.419 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:43.676 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:43.676 22:38:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.676 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.934 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.192 rmmod nvme_tcp 00:08:44.192 rmmod nvme_fabrics 00:08:44.192 rmmod nvme_keyring 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3437162 ']' 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3437162 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3437162 ']' 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3437162 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3437162 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3437162' 00:08:44.192 killing process with pid 3437162 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3437162 00:08:44.192 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3437162 00:08:44.451 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.451 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.451 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.451 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.451 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.451 22:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.451 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.451 22:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.357 22:38:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:46.357 00:08:46.357 real 0m6.403s 00:08:46.357 user 0m9.089s 00:08:46.357 sys 0m2.117s 00:08:46.357 22:38:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:46.357 22:38:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.357 ************************************ 00:08:46.357 END TEST nvmf_referrals 00:08:46.357 ************************************ 00:08:46.357 22:38:38 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:46.357 22:38:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:46.357 22:38:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:46.357 22:38:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:46.616 ************************************ 00:08:46.616 START TEST nvmf_connect_disconnect 00:08:46.616 ************************************ 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:46.616 * Looking for test storage... 00:08:46.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.616 22:38:38 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.616 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.617 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.617 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.617 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.617 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.617 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.617 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.617 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.617 22:38:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.519 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.519 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.519 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.519 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.519 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:08:48.520 00:08:48.520 --- 10.0.0.2 ping statistics --- 00:08:48.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.520 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:08:48.520 00:08:48.520 --- 10.0.0.1 ping statistics --- 00:08:48.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.520 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.520 22:38:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3439455 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3439455 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3439455 ']' 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:48.520 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.777 [2024-07-26 22:38:41.064986] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
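For reference, the nvmf_tcp_init sequence traced above reduces to the short script below. This is a condensed sketch, not the harness itself: the interface names cvl_0_0/cvl_0_1 are the two E810 ports detected on this host and would differ on another machine, and root privileges are required.

    # Sketch of nvmf_tcp_init as traced above (run as root).
    TGT_IF=cvl_0_0          # moved into its own namespace for the target
    INI_IF=cvl_0_1          # stays in the default namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
    ping -c 1 10.0.0.2                                               # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1                           # target ns -> root ns

The two pings are exactly the round-trip check whose output appears above; only after both succeed does the harness start nvmf_tgt inside the namespace.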
00:08:48.777 [2024-07-26 22:38:41.065095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.777 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.777 [2024-07-26 22:38:41.133630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.777 [2024-07-26 22:38:41.227222] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.777 [2024-07-26 22:38:41.227288] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.777 [2024-07-26 22:38:41.227304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.777 [2024-07-26 22:38:41.227318] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.777 [2024-07-26 22:38:41.227331] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.777 [2024-07-26 22:38:41.227388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.777 [2024-07-26 22:38:41.227431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.777 [2024-07-26 22:38:41.227508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.777 [2024-07-26 22:38:41.227511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.033 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:49.033 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:49.033 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.033 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.033 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.033 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.033 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:49.033 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.033 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.033 [2024-07-26 22:38:41.397025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.034 22:38:41 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 [2024-07-26 22:38:41.452130] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:49.034 22:38:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:51.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.079 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:36.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:38.724 rmmod nvme_tcp 00:12:38.724 rmmod nvme_fabrics 00:12:38.724 rmmod nvme_keyring 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3439455 ']' 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3439455 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 
3439455 ']' 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3439455 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3439455 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3439455' 00:12:38.724 killing process with pid 3439455 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3439455 00:12:38.724 22:42:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3439455 00:12:38.724 22:42:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:38.724 22:42:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:38.724 22:42:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:38.724 22:42:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.724 22:42:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:38.724 22:42:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.724 22:42:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.724 22:42:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.258 22:42:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:41.258 00:12:41.258 real 3m54.367s 00:12:41.258 user 14m53.032s 00:12:41.258 sys 0m33.849s 00:12:41.258 22:42:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:41.258 22:42:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:41.258 ************************************ 00:12:41.258 END TEST nvmf_connect_disconnect 00:12:41.258 ************************************ 00:12:41.258 22:42:33 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:41.258 22:42:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:41.258 22:42:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:41.258 22:42:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:41.258 ************************************ 00:12:41.258 START TEST nvmf_multitarget 00:12:41.258 ************************************ 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:41.258 * Looking for test storage... 
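Before the multitarget trace continues: the nvmf_connect_disconnect run that just ended above amounts to one malloc-backed subsystem plus 100 connect/disconnect cycles against it. A minimal standalone sketch, with rpc.py standing in for the harness's rpc_cmd wrapper and flags copied from the trace (the real loop lives in test/nvmf/target/connect_disconnect.sh):

    # Target side: transport, 64 MiB malloc bdev (512 B blocks), subsystem, listener.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                  # returns bdev name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: 100 iterations; -i 8 (eight I/O queues) per NVME_CONNECT above.
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the "disconnected 1 controller(s)" lines seen above
    done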
00:12:41.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:41.258 22:42:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:43.158 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:43.159 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:43.159 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:43.159 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:43.159 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:43.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:43.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:12:43.159 00:12:43.159 --- 10.0.0.2 ping statistics --- 00:12:43.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.159 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:12:43.159 00:12:43.159 --- 10.0.0.1 ping statistics --- 00:12:43.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.159 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3470912 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3470912 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3470912 ']' 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:43.159 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.159 [2024-07-26 22:42:35.538874] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
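As the trace below shows, multitarget.sh drives everything through test/nvmf/target/multitarget_rpc.py: count the targets, create nvmf_tgt_1 and nvmf_tgt_2, delete them again, and re-check the count. A minimal sketch of that sequence, with flags copied from the trace and jq used the same way the test uses it:

    RPC=./test/nvmf/target/multitarget_rpc.py     # path relative to the spdk checkout

    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at start
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target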
00:12:43.159 [2024-07-26 22:42:35.538959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.159 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.159 [2024-07-26 22:42:35.604639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.417 [2024-07-26 22:42:35.695872] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.417 [2024-07-26 22:42:35.695922] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.417 [2024-07-26 22:42:35.695935] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.417 [2024-07-26 22:42:35.695945] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.417 [2024-07-26 22:42:35.695955] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.417 [2024-07-26 22:42:35.696036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.417 [2024-07-26 22:42:35.696102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.417 [2024-07-26 22:42:35.696166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.417 [2024-07-26 22:42:35.696168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.417 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:43.417 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:43.417 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.417 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.417 22:42:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.417 22:42:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.417 22:42:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:43.417 22:42:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:43.417 22:42:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:43.674 22:42:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:43.674 22:42:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:43.674 "nvmf_tgt_1" 00:12:43.674 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:43.674 "nvmf_tgt_2" 00:12:43.931 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:43.931 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:43.931 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:43.931 
22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:43.931 true 00:12:43.931 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:44.188 true 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.188 rmmod nvme_tcp 00:12:44.188 rmmod nvme_fabrics 00:12:44.188 rmmod nvme_keyring 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3470912 ']' 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3470912 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3470912 ']' 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3470912 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3470912 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3470912' 00:12:44.188 killing process with pid 3470912 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3470912 00:12:44.188 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3470912 00:12:44.446 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.446 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.446 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.446 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.446 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.446 22:42:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.446 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.446 22:42:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.009 22:42:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:47.009 00:12:47.009 real 0m5.652s 00:12:47.009 user 0m6.243s 00:12:47.009 sys 0m1.882s 00:12:47.009 22:42:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:47.009 22:42:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:47.009 ************************************ 00:12:47.009 END TEST nvmf_multitarget 00:12:47.009 ************************************ 00:12:47.009 22:42:38 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:47.009 22:42:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:47.009 22:42:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.009 22:42:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:47.009 ************************************ 00:12:47.009 START TEST nvmf_rpc 00:12:47.009 ************************************ 00:12:47.009 22:42:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:47.009 * Looking for test storage... 00:12:47.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.009 22:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.009 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:47.009 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.009 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.009 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.010 22:42:39 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.010 
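[editor's note] The repeated segments in the PATH exports above are expected: every nested test script sources paths/export.sh, which unconditionally prepends the Go/protoc/golangci directories again, so PATH accumulates duplicates across the run. Harmless, but noisy in the trace. A minimal sketch of a guard that would keep the prepend idempotent (prepend_path_once is a hypothetical helper, not part of the SPDK scripts):

    # prepend_path_once: add a directory to PATH only if it is not already there
    prepend_path_once() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already present: do nothing
            *) PATH="$1:$PATH" ;;   # otherwise prepend
        esac
    }
    prepend_path_once /opt/go/1.21.1/bin
    prepend_path_once /opt/protoc/21.7/bin
    prepend_path_once /opt/golangci/1.54.2/bin
    export PATH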
22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.010 22:42:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:48.914 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:48.914 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:48.914 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.914 
22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:48.914 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.914 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
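[editor's note] The nvmf_tcp_init steps above build a two-endpoint TCP rig out of the two ice ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), with TCP port 4420 opened through iptables. The ping exchanges that follow verify reachability in both directions. Condensed recap, same commands and order as the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target ns -> root ns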
00:12:48.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:12:48.915 00:12:48.915 --- 10.0.0.2 ping statistics --- 00:12:48.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.915 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:12:48.915 00:12:48.915 --- 10.0.0.1 ping statistics --- 00:12:48.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.915 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3473011 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3473011 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3473011 ']' 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:48.915 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.915 [2024-07-26 22:42:41.332899] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
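[editor's note] nvmfappstart launches the target inside the namespace and waitforlisten blocks until the app answers on its RPC socket. The body of waitforlisten is not shown in this trace; a minimal sketch of the idea, assuming a poll loop against /var/tmp/spdk.sock (paths and flags as in the trace, the loop details are an assumption):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the app accepts RPCs on its UNIX domain socket
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done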
00:12:48.915 [2024-07-26 22:42:41.332997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.915 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.915 [2024-07-26 22:42:41.412942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.173 [2024-07-26 22:42:41.511333] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.173 [2024-07-26 22:42:41.511394] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.173 [2024-07-26 22:42:41.511411] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.173 [2024-07-26 22:42:41.511425] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.173 [2024-07-26 22:42:41.511437] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.173 [2024-07-26 22:42:41.511530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.173 [2024-07-26 22:42:41.511560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.173 [2024-07-26 22:42:41.511615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.173 [2024-07-26 22:42:41.511619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:49.173 "tick_rate": 2700000000, 00:12:49.173 "poll_groups": [ 00:12:49.173 { 00:12:49.173 "name": "nvmf_tgt_poll_group_000", 00:12:49.173 "admin_qpairs": 0, 00:12:49.173 "io_qpairs": 0, 00:12:49.173 "current_admin_qpairs": 0, 00:12:49.173 "current_io_qpairs": 0, 00:12:49.173 "pending_bdev_io": 0, 00:12:49.173 "completed_nvme_io": 0, 00:12:49.173 "transports": [] 00:12:49.173 }, 00:12:49.173 { 00:12:49.173 "name": "nvmf_tgt_poll_group_001", 00:12:49.173 "admin_qpairs": 0, 00:12:49.173 "io_qpairs": 0, 00:12:49.173 "current_admin_qpairs": 0, 00:12:49.173 "current_io_qpairs": 0, 00:12:49.173 "pending_bdev_io": 0, 00:12:49.173 "completed_nvme_io": 0, 00:12:49.173 "transports": [] 00:12:49.173 }, 00:12:49.173 { 00:12:49.173 "name": "nvmf_tgt_poll_group_002", 00:12:49.173 "admin_qpairs": 0, 00:12:49.173 "io_qpairs": 0, 00:12:49.173 "current_admin_qpairs": 0, 00:12:49.173 "current_io_qpairs": 0, 00:12:49.173 "pending_bdev_io": 0, 00:12:49.173 "completed_nvme_io": 0, 00:12:49.173 "transports": [] 
00:12:49.173 }, 00:12:49.173 { 00:12:49.173 "name": "nvmf_tgt_poll_group_003", 00:12:49.173 "admin_qpairs": 0, 00:12:49.173 "io_qpairs": 0, 00:12:49.173 "current_admin_qpairs": 0, 00:12:49.173 "current_io_qpairs": 0, 00:12:49.173 "pending_bdev_io": 0, 00:12:49.173 "completed_nvme_io": 0, 00:12:49.173 "transports": [] 00:12:49.173 } 00:12:49.173 ] 00:12:49.173 }' 00:12:49.173 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:49.174 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 [2024-07-26 22:42:41.761230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:49.432 "tick_rate": 2700000000, 00:12:49.432 "poll_groups": [ 00:12:49.432 { 00:12:49.432 "name": "nvmf_tgt_poll_group_000", 00:12:49.432 "admin_qpairs": 0, 00:12:49.432 "io_qpairs": 0, 00:12:49.432 "current_admin_qpairs": 0, 00:12:49.432 "current_io_qpairs": 0, 00:12:49.432 "pending_bdev_io": 0, 00:12:49.432 "completed_nvme_io": 0, 00:12:49.432 "transports": [ 00:12:49.432 { 00:12:49.432 "trtype": "TCP" 00:12:49.432 } 00:12:49.432 ] 00:12:49.432 }, 00:12:49.432 { 00:12:49.432 "name": "nvmf_tgt_poll_group_001", 00:12:49.432 "admin_qpairs": 0, 00:12:49.432 "io_qpairs": 0, 00:12:49.432 "current_admin_qpairs": 0, 00:12:49.432 "current_io_qpairs": 0, 00:12:49.432 "pending_bdev_io": 0, 00:12:49.432 "completed_nvme_io": 0, 00:12:49.432 "transports": [ 00:12:49.432 { 00:12:49.432 "trtype": "TCP" 00:12:49.432 } 00:12:49.432 ] 00:12:49.432 }, 00:12:49.432 { 00:12:49.432 "name": "nvmf_tgt_poll_group_002", 00:12:49.432 "admin_qpairs": 0, 00:12:49.432 "io_qpairs": 0, 00:12:49.432 "current_admin_qpairs": 0, 00:12:49.432 "current_io_qpairs": 0, 00:12:49.432 "pending_bdev_io": 0, 00:12:49.432 "completed_nvme_io": 0, 00:12:49.432 "transports": [ 00:12:49.432 { 00:12:49.432 "trtype": "TCP" 00:12:49.432 } 00:12:49.432 ] 00:12:49.432 }, 00:12:49.432 { 00:12:49.432 "name": "nvmf_tgt_poll_group_003", 00:12:49.432 "admin_qpairs": 0, 00:12:49.432 "io_qpairs": 0, 00:12:49.432 "current_admin_qpairs": 0, 00:12:49.432 "current_io_qpairs": 0, 00:12:49.432 "pending_bdev_io": 0, 00:12:49.432 "completed_nvme_io": 0, 00:12:49.432 "transports": [ 00:12:49.432 { 00:12:49.432 "trtype": "TCP" 00:12:49.432 } 00:12:49.432 ] 00:12:49.432 } 00:12:49.432 ] 
00:12:49.432 }' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 Malloc1 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 [2024-07-26 22:42:41.923190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:49.432 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:49.690 [2024-07-26 22:42:41.945618] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:49.690 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:49.690 could not add new controller: failed to write to nvme-fabrics device 00:12:49.690 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:49.690 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:49.690 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:49.690 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:49.690 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.690 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.690 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.690 22:42:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.690 22:42:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.255 22:42:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.255 22:42:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:50.255 22:42:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.255 22:42:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:50.255 22:42:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.779 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.780 [2024-07-26 22:42:44.820177] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:52.780 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:52.780 could not add new controller: failed to write to nvme-fabrics device 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.780 22:42:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.038 22:42:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.038 22:42:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:53.038 22:42:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.038 22:42:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:53.038 22:42:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.564 [2024-07-26 22:42:47.651739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.564 22:42:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.822 22:42:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.822 22:42:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:55.822 22:42:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.822 22:42:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:55.822 22:42:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:58.347 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.348 [2024-07-26 22:42:50.415950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.348 
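[editor's note] Each of the five loop iterations exercises the same subsystem lifecycle over the RPC interface. One iteration, condensed from the traced commands (rpc_cmd is the harness wrapper; the bare rpc.py form below is an equivalent sketch, with NVME_HOSTNQN/NVME_HOSTID coming from common.sh as shown earlier in the trace):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # ... verify the namespace appears via its serial, then tear down in reverse:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1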
22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.348 22:42:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.604 22:42:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.605 22:42:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:58.605 22:42:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.605 22:42:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:58.605 22:42:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.129 22:42:53 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.129 [2024-07-26 22:42:53.219444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.129 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.692 22:42:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.692 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:01.692 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.692 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:01.692 22:42:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:03.586 22:42:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 [2024-07-26 22:42:56.035308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.586 22:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.517 22:42:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.517 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:04.517 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
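[editor's note] Every connect/disconnect above is verified with the waitforserial / waitforserial_disconnect pattern: sleep, count block devices whose serial matches, and return once the count reaches the expected number. A sketch reconstructed from the traced commands (the retry bound and 2-second sleep appear in the trace; the exact function body is an assumption):

    waitforserial() {
        local serial=$1 expected=${2:-1} i=0
        sleep 2
        while (( i++ <= 15 )); do
            local n
            n=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( n == expected )) && return 0   # device(s) visible: success
            sleep 2
        done
        return 1                              # gave up after ~15 retries
    }
    waitforserial SPDKISFASTANDAWESOME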
00:13:04.517 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:04.517 22:42:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.414 [2024-07-26 22:42:58.882909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.414 22:42:58 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.414 22:42:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.347 22:42:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.347 22:42:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:07.347 22:42:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.347 22:42:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:07.347 22:42:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
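The trace above is one pass of target/rpc.sh's create/connect/teardown cycle. A condensed sketch of a single iteration, using only the rpc.py and nvme invocations visible in this trace (the Malloc1 bdev, serial, and hostnqn/hostid values are taken from this run; the polling loop stands in for the waitforserial helper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
               --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: poll until lsblk shows a block device carrying the subsystem serial
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

rpc.sh repeats this for each of the $loops iterations, then switches to the listener/namespace add-remove loop traced next.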
00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 [2024-07-26 22:43:01.696225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 [2024-07-26 22:43:01.744262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.283 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 [2024-07-26 22:43:01.792465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 [2024-07-26 22:43:01.840609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 [2024-07-26 22:43:01.888761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.541 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:09.541 "tick_rate": 2700000000, 00:13:09.541 "poll_groups": [ 00:13:09.541 { 00:13:09.541 "name": "nvmf_tgt_poll_group_000", 00:13:09.541 "admin_qpairs": 2, 00:13:09.541 
"io_qpairs": 84, 00:13:09.542 "current_admin_qpairs": 0, 00:13:09.542 "current_io_qpairs": 0, 00:13:09.542 "pending_bdev_io": 0, 00:13:09.542 "completed_nvme_io": 232, 00:13:09.542 "transports": [ 00:13:09.542 { 00:13:09.542 "trtype": "TCP" 00:13:09.542 } 00:13:09.542 ] 00:13:09.542 }, 00:13:09.542 { 00:13:09.542 "name": "nvmf_tgt_poll_group_001", 00:13:09.542 "admin_qpairs": 2, 00:13:09.542 "io_qpairs": 84, 00:13:09.542 "current_admin_qpairs": 0, 00:13:09.542 "current_io_qpairs": 0, 00:13:09.542 "pending_bdev_io": 0, 00:13:09.542 "completed_nvme_io": 135, 00:13:09.542 "transports": [ 00:13:09.542 { 00:13:09.542 "trtype": "TCP" 00:13:09.542 } 00:13:09.542 ] 00:13:09.542 }, 00:13:09.542 { 00:13:09.542 "name": "nvmf_tgt_poll_group_002", 00:13:09.542 "admin_qpairs": 1, 00:13:09.542 "io_qpairs": 84, 00:13:09.542 "current_admin_qpairs": 0, 00:13:09.542 "current_io_qpairs": 0, 00:13:09.542 "pending_bdev_io": 0, 00:13:09.542 "completed_nvme_io": 184, 00:13:09.542 "transports": [ 00:13:09.542 { 00:13:09.542 "trtype": "TCP" 00:13:09.542 } 00:13:09.542 ] 00:13:09.542 }, 00:13:09.542 { 00:13:09.542 "name": "nvmf_tgt_poll_group_003", 00:13:09.542 "admin_qpairs": 2, 00:13:09.542 "io_qpairs": 84, 00:13:09.542 "current_admin_qpairs": 0, 00:13:09.542 "current_io_qpairs": 0, 00:13:09.542 "pending_bdev_io": 0, 00:13:09.542 "completed_nvme_io": 135, 00:13:09.542 "transports": [ 00:13:09.542 { 00:13:09.542 "trtype": "TCP" 00:13:09.542 } 00:13:09.542 ] 00:13:09.542 } 00:13:09.542 ] 00:13:09.542 }' 00:13:09.542 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:09.542 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:09.542 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:09.542 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:09.542 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:09.542 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:09.542 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:09.542 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:09.542 22:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:09.542 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:09.542 rmmod nvme_tcp 00:13:09.800 rmmod nvme_fabrics 00:13:09.800 rmmod nvme_keyring 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:09.800 22:43:02 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3473011 ']' 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3473011 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3473011 ']' 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3473011 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3473011 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3473011' 00:13:09.800 killing process with pid 3473011 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3473011 00:13:09.800 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3473011 00:13:10.060 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:10.060 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:10.060 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:10.060 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:10.060 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:10.060 22:43:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.060 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.060 22:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.965 22:43:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:11.965 00:13:11.965 real 0m25.416s 00:13:11.965 user 1m22.600s 00:13:11.965 sys 0m4.188s 00:13:11.965 22:43:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:11.965 22:43:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.965 ************************************ 00:13:11.965 END TEST nvmf_rpc 00:13:11.965 ************************************ 00:13:11.965 22:43:04 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:11.965 22:43:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:11.965 22:43:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:11.965 22:43:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:11.965 ************************************ 00:13:11.965 START TEST nvmf_invalid 00:13:11.965 ************************************ 00:13:11.965 22:43:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:12.223 * Looking for test storage... 
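Before nvmf_rpc wrapped up above, it validated the qpair totals with rpc.sh's jsum helper, the (( 7 > 0 )) and (( 336 > 0 )) checks in the trace, which sums one numeric field across every poll group of the nvmf_get_stats dump. A standalone sketch of the same jq-plus-awk pipeline, assuming rpc.py is on PATH:

  # jsum <jq filter>: sum a per-poll-group counter from nvmf_get_stats
  jsum() {
    rpc.py nvmf_get_stats | jq "$1" | awk '{s+=$1} END {print s}'
  }
  jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 in the stats dump above
  jsum '.poll_groups[].io_qpairs'      # 4 poll groups x 84 = 336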
00:13:12.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:12.223 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:12.224 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.224 22:43:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.224 22:43:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.224 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:12.224 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:12.224 22:43:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:12.224 22:43:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:14.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:14.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:14.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:14.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.123 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:14.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:14.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:13:14.124 00:13:14.124 --- 10.0.0.2 ping statistics --- 00:13:14.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.124 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:13:14.124 00:13:14.124 --- 10.0.0.1 ping statistics --- 00:13:14.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.124 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:14.124 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3477563 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3477563 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3477563 ']' 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:14.407 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:14.407 [2024-07-26 22:43:06.688572] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
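The two ping replies above conclude nvmf_tcp_init: the trace moved one port of the E810 pair into a network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) run on separate network stacks of the same host. The plumbing, collected from the commands traced above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # the 0.134 ms reply above

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the SPDK/DPDK startup now beginning in the trace.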
00:13:14.407 [2024-07-26 22:43:06.688650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.407 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.407 [2024-07-26 22:43:06.756343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.408 [2024-07-26 22:43:06.848991] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.408 [2024-07-26 22:43:06.849055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.408 [2024-07-26 22:43:06.849096] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.408 [2024-07-26 22:43:06.849110] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.408 [2024-07-26 22:43:06.849122] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.408 [2024-07-26 22:43:06.849217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.408 [2024-07-26 22:43:06.849276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.408 [2024-07-26 22:43:06.849304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.408 [2024-07-26 22:43:06.849307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.666 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:14.666 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:13:14.666 22:43:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.666 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.666 22:43:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:14.666 22:43:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.666 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:14.666 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22450 00:13:14.923 [2024-07-26 22:43:07.285686] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:14.923 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:14.923 { 00:13:14.923 "nqn": "nqn.2016-06.io.spdk:cnode22450", 00:13:14.923 "tgt_name": "foobar", 00:13:14.923 "method": "nvmf_create_subsystem", 00:13:14.923 "req_id": 1 00:13:14.923 } 00:13:14.923 Got JSON-RPC error response 00:13:14.923 response: 00:13:14.923 { 00:13:14.923 "code": -32603, 00:13:14.923 "message": "Unable to find target foobar" 00:13:14.923 }' 00:13:14.923 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:14.923 { 00:13:14.923 "nqn": "nqn.2016-06.io.spdk:cnode22450", 00:13:14.923 "tgt_name": "foobar", 00:13:14.923 "method": "nvmf_create_subsystem", 00:13:14.923 "req_id": 1 00:13:14.924 } 00:13:14.924 Got JSON-RPC error response 00:13:14.924 response: 00:13:14.924 { 00:13:14.924 "code": -32603, 00:13:14.924 "message": "Unable to find target foobar" 00:13:14.924 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:14.924 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:14.924 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16744 00:13:15.181 [2024-07-26 22:43:07.550594] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16744: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:15.181 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:15.181 { 00:13:15.181 "nqn": "nqn.2016-06.io.spdk:cnode16744", 00:13:15.181 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:15.181 "method": "nvmf_create_subsystem", 00:13:15.181 "req_id": 1 00:13:15.181 } 00:13:15.181 Got JSON-RPC error response 00:13:15.181 response: 00:13:15.181 { 00:13:15.181 "code": -32602, 00:13:15.181 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:15.181 }' 00:13:15.181 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:15.181 { 00:13:15.181 "nqn": "nqn.2016-06.io.spdk:cnode16744", 00:13:15.181 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:15.181 "method": "nvmf_create_subsystem", 00:13:15.181 "req_id": 1 00:13:15.181 } 00:13:15.181 Got JSON-RPC error response 00:13:15.181 response: 00:13:15.181 { 00:13:15.181 "code": -32602, 00:13:15.181 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:15.181 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:15.181 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:15.181 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10277 00:13:15.439 [2024-07-26 22:43:07.819494] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10277: invalid model number 'SPDK_Controller' 00:13:15.439 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:15.439 { 00:13:15.439 "nqn": "nqn.2016-06.io.spdk:cnode10277", 00:13:15.439 "model_number": "SPDK_Controller\u001f", 00:13:15.439 "method": "nvmf_create_subsystem", 00:13:15.439 "req_id": 1 00:13:15.439 } 00:13:15.439 Got JSON-RPC error response 00:13:15.439 response: 00:13:15.439 { 00:13:15.439 "code": -32602, 00:13:15.439 "message": "Invalid MN SPDK_Controller\u001f" 00:13:15.439 }' 00:13:15.439 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:15.439 { 00:13:15.439 "nqn": "nqn.2016-06.io.spdk:cnode10277", 00:13:15.439 "model_number": "SPDK_Controller\u001f", 00:13:15.439 "method": "nvmf_create_subsystem", 00:13:15.439 "req_id": 1 00:13:15.439 } 00:13:15.439 Got JSON-RPC error response 00:13:15.439 response: 00:13:15.439 { 00:13:15.439 "code": -32602, 00:13:15.439 "message": "Invalid MN SPDK_Controller\u001f" 00:13:15.439 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:15.439 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:15.439 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:15.439 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:15.439 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:15.439 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 48 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '87g=i;zu04>c=q#0j7CEv' 00:13:15.440 22:43:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '87g=i;zu04>c=q#0j7CEv' nqn.2016-06.io.spdk:cnode13294 00:13:15.698 [2024-07-26 22:43:08.120520] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13294: invalid serial number '87g=i;zu04>c=q#0j7CEv' 00:13:15.698 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:15.698 { 00:13:15.698 "nqn": "nqn.2016-06.io.spdk:cnode13294", 00:13:15.698 "serial_number": "87g=i;zu04>c=q#0j7CEv", 00:13:15.698 "method": "nvmf_create_subsystem", 00:13:15.698 "req_id": 1 00:13:15.698 } 00:13:15.698 Got JSON-RPC error response 00:13:15.698 response: 00:13:15.698 { 00:13:15.698 "code": -32602, 
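The trace above is target/invalid.sh's gen_random_s helper unrolled by xtrace: each iteration takes a decimal code from the chars array (ASCII code points 32-127), converts it with printf %x, renders the character with echo -e '\xNN', and appends it to string. The 21-character result '87g=i;zu04>c=q#0j7CEv' is then passed to nvmf_create_subsystem -s, which must fail because the NVMe serial-number field holds at most 20 bytes. A condensed sketch of the technique, assuming bash's RANDOM for the character picks (the selection logic itself falls outside this excerpt, and the real script inlines the full array literal):

    gen_random_s() {
        local length=$1 ll string=''
        local chars=($(seq 32 127))   # code points 32..127, as in the trace
        for (( ll = 0; ll < length; ll++ )); do
            local code=${chars[RANDOM % ${#chars[@]}]}
            # printf %x + echo -e '\xNN' turn the code point into a character
            string+=$(echo -e "\\x$(printf %x "$code")")
        done
        echo "$string"
    }

The @28 check visible at the end of the loop ([[ 8 == \- ]]) guards against the string starting with '-', which rpc.py would otherwise try to parse as an option flag.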
00:13:15.698 "message": "Invalid SN 87g=i;zu04>c=q#0j7CEv" 00:13:15.698 }' 00:13:15.698 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:15.698 { 00:13:15.698 "nqn": "nqn.2016-06.io.spdk:cnode13294", 00:13:15.698 "serial_number": "87g=i;zu04>c=q#0j7CEv", 00:13:15.698 "method": "nvmf_create_subsystem", 00:13:15.698 "req_id": 1 00:13:15.698 } 00:13:15.698 Got JSON-RPC error response 00:13:15.698 response: 00:13:15.698 { 00:13:15.698 "code": -32602, 00:13:15.698 "message": "Invalid SN 87g=i;zu04>c=q#0j7CEv" 00:13:15.698 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:15.698 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:15.698 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:15.698 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:15.698 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:15.698 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:15.698 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:15.699 22:43:08 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:15.699 22:43:08 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.699 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
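A few statements up, code point 127 (DEL) went into the model-number string as a raw byte (string+=$'\177'). It is unprintable, which is why the JSON-RPC error further down renders that position as the escape \u007f. A quick way to confirm the byte:

    printf '%s' $'\177' | od -An -tx1   # prints: 7f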
00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:15.957 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]] 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '%$&=9oF <`+mQD`f.*uVg]1m5h+l@2Zy1xCXp0Vc' 00:13:15.958 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '%$&=9oF <`+mQD`f.*uVg]1m5h+l@2Zy1xCXp0Vc' nqn.2016-06.io.spdk:cnode20104 00:13:16.216 [2024-07-26 22:43:08.513769] nvmf_rpc.c: 
422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20104: invalid model number '%$&=9oF <`+mQD`f.*uVg]1m5h+l@2Zy1xCXp0Vc' 00:13:16.216 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:16.216 { 00:13:16.216 "nqn": "nqn.2016-06.io.spdk:cnode20104", 00:13:16.216 "model_number": "%$&=9oF <`+mQD`f\u007f.*uVg]1m5h+l@2Zy1xCXp0Vc", 00:13:16.216 "method": "nvmf_create_subsystem", 00:13:16.216 "req_id": 1 00:13:16.216 } 00:13:16.216 Got JSON-RPC error response 00:13:16.216 response: 00:13:16.216 { 00:13:16.216 "code": -32602, 00:13:16.216 "message": "Invalid MN %$&=9oF <`+mQD`f\u007f.*uVg]1m5h+l@2Zy1xCXp0Vc" 00:13:16.216 }' 00:13:16.216 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:16.216 { 00:13:16.216 "nqn": "nqn.2016-06.io.spdk:cnode20104", 00:13:16.216 "model_number": "%$&=9oF <`+mQD`f\u007f.*uVg]1m5h+l@2Zy1xCXp0Vc", 00:13:16.216 "method": "nvmf_create_subsystem", 00:13:16.216 "req_id": 1 00:13:16.216 } 00:13:16.216 Got JSON-RPC error response 00:13:16.216 response: 00:13:16.216 { 00:13:16.216 "code": -32602, 00:13:16.216 "message": "Invalid MN %$&=9oF <`+mQD`f\u007f.*uVg]1m5h+l@2Zy1xCXp0Vc" 00:13:16.216 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:16.216 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:16.474 [2024-07-26 22:43:08.762675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.474 22:43:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:16.731 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:16.731 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:16.731 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:16.731 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:16.731 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:16.989 [2024-07-26 22:43:09.276448] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:16.989 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:16.989 { 00:13:16.989 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:16.989 "listen_address": { 00:13:16.989 "trtype": "tcp", 00:13:16.989 "traddr": "", 00:13:16.989 "trsvcid": "4421" 00:13:16.989 }, 00:13:16.989 "method": "nvmf_subsystem_remove_listener", 00:13:16.989 "req_id": 1 00:13:16.989 } 00:13:16.989 Got JSON-RPC error response 00:13:16.989 response: 00:13:16.989 { 00:13:16.989 "code": -32602, 00:13:16.989 "message": "Invalid parameters" 00:13:16.989 }' 00:13:16.989 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:16.989 { 00:13:16.989 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:16.989 "listen_address": { 00:13:16.989 "trtype": "tcp", 00:13:16.989 "traddr": "", 00:13:16.989 "trsvcid": "4421" 00:13:16.989 }, 00:13:16.989 "method": "nvmf_subsystem_remove_listener", 00:13:16.989 "req_id": 1 00:13:16.989 } 00:13:16.989 Got JSON-RPC error response 00:13:16.989 response: 00:13:16.989 { 00:13:16.989 "code": -32602, 00:13:16.989 "message": "Invalid parameters" 00:13:16.989 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 
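Every negative case in this suite follows the capture-and-match pattern seen twice above: run the RPC, keep the whole JSON-RPC error envelope in out, and glob-match the decisive message text. Reduced to its core, with rpc.py standing in for the workspace-absolute scripts/rpc.py path used in the log:

    nqn=nqn.2016-06.io.spdk:cnode13294      # hypothetical reuse of the NQN above
    bad_sn='87g=i;zu04>c=q#0j7CEv'          # the 21-char string from the trace
    out=$(rpc.py nvmf_create_subsystem -s "$bad_sn" "$nqn" 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo PASS   # pass only on the expected error

The block that follows applies the same pattern to controller-ID ranges. The error strings show SPDK requires 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF), so all five probes below must be rejected:

    rpc.py nvmf_create_subsystem "$nqn" -i 0          # min below 1
    rpc.py nvmf_create_subsystem "$nqn" -i 65520      # min above 65519
    rpc.py nvmf_create_subsystem "$nqn" -I 0          # max below min
    rpc.py nvmf_create_subsystem "$nqn" -I 65520      # max above 65519
    rpc.py nvmf_create_subsystem "$nqn" -i 6 -I 5     # min greater than max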
00:13:16.989 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11813 -i 0 00:13:17.247 [2024-07-26 22:43:09.513165] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11813: invalid cntlid range [0-65519] 00:13:17.247 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:17.247 { 00:13:17.247 "nqn": "nqn.2016-06.io.spdk:cnode11813", 00:13:17.247 "min_cntlid": 0, 00:13:17.247 "method": "nvmf_create_subsystem", 00:13:17.247 "req_id": 1 00:13:17.247 } 00:13:17.247 Got JSON-RPC error response 00:13:17.247 response: 00:13:17.247 { 00:13:17.247 "code": -32602, 00:13:17.247 "message": "Invalid cntlid range [0-65519]" 00:13:17.247 }' 00:13:17.247 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:17.247 { 00:13:17.247 "nqn": "nqn.2016-06.io.spdk:cnode11813", 00:13:17.247 "min_cntlid": 0, 00:13:17.247 "method": "nvmf_create_subsystem", 00:13:17.247 "req_id": 1 00:13:17.247 } 00:13:17.247 Got JSON-RPC error response 00:13:17.247 response: 00:13:17.247 { 00:13:17.247 "code": -32602, 00:13:17.247 "message": "Invalid cntlid range [0-65519]" 00:13:17.247 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:17.247 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29900 -i 65520 00:13:17.504 [2024-07-26 22:43:09.766013] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29900: invalid cntlid range [65520-65519] 00:13:17.504 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:17.504 { 00:13:17.504 "nqn": "nqn.2016-06.io.spdk:cnode29900", 00:13:17.504 "min_cntlid": 65520, 00:13:17.504 "method": "nvmf_create_subsystem", 00:13:17.504 "req_id": 1 00:13:17.504 } 00:13:17.504 Got JSON-RPC error response 00:13:17.504 response: 00:13:17.504 { 00:13:17.504 "code": -32602, 00:13:17.504 "message": "Invalid cntlid range [65520-65519]" 00:13:17.504 }' 00:13:17.504 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:17.504 { 00:13:17.504 "nqn": "nqn.2016-06.io.spdk:cnode29900", 00:13:17.504 "min_cntlid": 65520, 00:13:17.505 "method": "nvmf_create_subsystem", 00:13:17.505 "req_id": 1 00:13:17.505 } 00:13:17.505 Got JSON-RPC error response 00:13:17.505 response: 00:13:17.505 { 00:13:17.505 "code": -32602, 00:13:17.505 "message": "Invalid cntlid range [65520-65519]" 00:13:17.505 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:17.505 22:43:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29162 -I 0 00:13:17.763 [2024-07-26 22:43:10.030962] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29162: invalid cntlid range [1-0] 00:13:17.763 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:17.763 { 00:13:17.763 "nqn": "nqn.2016-06.io.spdk:cnode29162", 00:13:17.763 "max_cntlid": 0, 00:13:17.763 "method": "nvmf_create_subsystem", 00:13:17.763 "req_id": 1 00:13:17.763 } 00:13:17.763 Got JSON-RPC error response 00:13:17.763 response: 00:13:17.763 { 00:13:17.763 "code": -32602, 00:13:17.763 "message": "Invalid cntlid range [1-0]" 00:13:17.763 }' 00:13:17.763 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 
request: 00:13:17.763 { 00:13:17.763 "nqn": "nqn.2016-06.io.spdk:cnode29162", 00:13:17.763 "max_cntlid": 0, 00:13:17.763 "method": "nvmf_create_subsystem", 00:13:17.763 "req_id": 1 00:13:17.763 } 00:13:17.763 Got JSON-RPC error response 00:13:17.763 response: 00:13:17.763 { 00:13:17.763 "code": -32602, 00:13:17.763 "message": "Invalid cntlid range [1-0]" 00:13:17.763 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:17.763 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12884 -I 65520 00:13:18.020 [2024-07-26 22:43:10.275742] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12884: invalid cntlid range [1-65520] 00:13:18.020 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:18.020 { 00:13:18.020 "nqn": "nqn.2016-06.io.spdk:cnode12884", 00:13:18.020 "max_cntlid": 65520, 00:13:18.020 "method": "nvmf_create_subsystem", 00:13:18.020 "req_id": 1 00:13:18.020 } 00:13:18.020 Got JSON-RPC error response 00:13:18.020 response: 00:13:18.020 { 00:13:18.020 "code": -32602, 00:13:18.020 "message": "Invalid cntlid range [1-65520]" 00:13:18.020 }' 00:13:18.020 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:18.020 { 00:13:18.020 "nqn": "nqn.2016-06.io.spdk:cnode12884", 00:13:18.020 "max_cntlid": 65520, 00:13:18.020 "method": "nvmf_create_subsystem", 00:13:18.020 "req_id": 1 00:13:18.020 } 00:13:18.020 Got JSON-RPC error response 00:13:18.020 response: 00:13:18.020 { 00:13:18.020 "code": -32602, 00:13:18.020 "message": "Invalid cntlid range [1-65520]" 00:13:18.020 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.020 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29809 -i 6 -I 5 00:13:18.279 [2024-07-26 22:43:10.528559] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29809: invalid cntlid range [6-5] 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:18.279 { 00:13:18.279 "nqn": "nqn.2016-06.io.spdk:cnode29809", 00:13:18.279 "min_cntlid": 6, 00:13:18.279 "max_cntlid": 5, 00:13:18.279 "method": "nvmf_create_subsystem", 00:13:18.279 "req_id": 1 00:13:18.279 } 00:13:18.279 Got JSON-RPC error response 00:13:18.279 response: 00:13:18.279 { 00:13:18.279 "code": -32602, 00:13:18.279 "message": "Invalid cntlid range [6-5]" 00:13:18.279 }' 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:18.279 { 00:13:18.279 "nqn": "nqn.2016-06.io.spdk:cnode29809", 00:13:18.279 "min_cntlid": 6, 00:13:18.279 "max_cntlid": 5, 00:13:18.279 "method": "nvmf_create_subsystem", 00:13:18.279 "req_id": 1 00:13:18.279 } 00:13:18.279 Got JSON-RPC error response 00:13:18.279 response: 00:13:18.279 { 00:13:18.279 "code": -32602, 00:13:18.279 "message": "Invalid cntlid range [6-5]" 00:13:18.279 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:18.279 { 00:13:18.279 "name": "foobar", 00:13:18.279 "method": "nvmf_delete_target", 00:13:18.279 "req_id": 1 00:13:18.279 } 00:13:18.279 Got 
JSON-RPC error response 00:13:18.279 response: 00:13:18.279 { 00:13:18.279 "code": -32602, 00:13:18.279 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:18.279 }' 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:18.279 { 00:13:18.279 "name": "foobar", 00:13:18.279 "method": "nvmf_delete_target", 00:13:18.279 "req_id": 1 00:13:18.279 } 00:13:18.279 Got JSON-RPC error response 00:13:18.279 response: 00:13:18.279 { 00:13:18.279 "code": -32602, 00:13:18.279 "message": "The specified target doesn't exist, cannot delete it." 00:13:18.279 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.279 rmmod nvme_tcp 00:13:18.279 rmmod nvme_fabrics 00:13:18.279 rmmod nvme_keyring 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3477563 ']' 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3477563 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3477563 ']' 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3477563 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3477563 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3477563' 00:13:18.279 killing process with pid 3477563 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3477563 00:13:18.279 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3477563 00:13:18.538 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.538 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.538 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.538 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.538 22:43:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.538 22:43:10 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.538 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.538 22:43:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.074 22:43:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:21.074 00:13:21.074 real 0m8.564s 00:13:21.074 user 0m20.056s 00:13:21.074 sys 0m2.389s 00:13:21.074 22:43:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:21.074 22:43:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:21.074 ************************************ 00:13:21.074 END TEST nvmf_invalid 00:13:21.074 ************************************ 00:13:21.074 22:43:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:21.074 22:43:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:21.074 22:43:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.074 22:43:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.074 ************************************ 00:13:21.074 START TEST nvmf_abort 00:13:21.074 ************************************ 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:21.074 * Looking for test storage... 00:13:21.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.074 22:43:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:21.075 22:43:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:22.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:22.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:22.976 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:22.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.976 22:43:15 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:13:22.976 00:13:22.976 --- 10.0.0.2 ping statistics --- 00:13:22.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.976 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:22.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:13:22.976 00:13:22.976 --- 10.0.0.1 ping statistics --- 00:13:22.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.976 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:22.976 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3480131 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3480131 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3480131 ']' 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:22.977 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.977 [2024-07-26 22:43:15.383932] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
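By this point the phy-mode fixture is in place: the two ice/E810 ports found at 0000:0a:00.0/.1 surfaced as cvl_0_0 and cvl_0_1, cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1), port 4420 was opened in iptables, and both directions answered a one-packet ping. The topology, distilled to the commands the trace just ran:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # drop stale addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

nvmf_tgt is then launched inside the namespace with -m 0xE (reactors on cores 1-3, matching the three 'Reactor started' lines below) and -e 0xFFFF to enable all tracepoint groups.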
00:13:22.977 [2024-07-26 22:43:15.384019] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.977 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.977 [2024-07-26 22:43:15.453329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.235 [2024-07-26 22:43:15.547754] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.235 [2024-07-26 22:43:15.547820] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.235 [2024-07-26 22:43:15.547846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.235 [2024-07-26 22:43:15.547860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.235 [2024-07-26 22:43:15.547871] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.235 [2024-07-26 22:43:15.547970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.235 [2024-07-26 22:43:15.548025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.235 [2024-07-26 22:43:15.548028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.235 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:23.235 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:23.236 [2024-07-26 22:43:15.696680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:23.236 Malloc0 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.236 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:23.494 Delay0 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:23.494 22:43:15 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:23.494 [2024-07-26 22:43:15.763142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.494 22:43:15 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:23.494 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.494 [2024-07-26 22:43:15.870648] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:26.025 Initializing NVMe Controllers 00:13:26.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:26.025 controller IO queue size 128 less than required 00:13:26.025 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:26.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:26.025 Initialization complete. Launching workers. 
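(Aside: every target-side step in this test goes through SPDK's JSON-RPC plane; the rpc_cmd wrapper traced above forwards to scripts/rpc.py against the target's UNIX socket. Condensed into direct rpc.py calls — paths shortened relative to the spdk checkout, and the comments are my gloss of the flags per the tools' usage text, not authoritative documentation:

    rpc=scripts/rpc.py                      # the log uses the absolute workspace path
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0           # 64 MiB RAM bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s read/write latencies, in us
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # One-second abort storm at queue depth 128 against the slow namespace:
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
         -c 0x1 -t 1 -l warning -q 128

The second-long bdev_delay latencies are the point of the exercise: they keep on the order of 128 reads stuck in flight so the abort tool always has outstanding I/O to cancel, which is why the summary just below reports tens of thousands of submitted aborts rather than a handful.)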
00:13:26.025 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32177 00:13:26.025 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32238, failed to submit 62 00:13:26.025 success 32181, unsuccess 57, failed 0 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.025 22:43:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.025 rmmod nvme_tcp 00:13:26.025 rmmod nvme_fabrics 00:13:26.025 rmmod nvme_keyring 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3480131 ']' 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3480131 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3480131 ']' 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3480131 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3480131 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3480131' 00:13:26.025 killing process with pid 3480131 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3480131 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3480131 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.025 22:43:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.964 22:43:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:27.964 00:13:27.964 real 0m7.273s 00:13:27.964 user 0m10.475s 00:13:27.964 sys 0m2.559s 00:13:27.964 22:43:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:27.964 22:43:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:27.964 ************************************ 00:13:27.964 END TEST nvmf_abort 00:13:27.964 ************************************ 00:13:27.965 22:43:20 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:27.965 22:43:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:27.965 22:43:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:27.965 22:43:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.965 ************************************ 00:13:27.965 START TEST nvmf_ns_hotplug_stress 00:13:27.965 ************************************ 00:13:27.965 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:27.965 * Looking for test storage... 00:13:27.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.965 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.965 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.224 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.225 22:43:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.225 22:43:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.225 22:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:30.133 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:30.133 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.133 22:43:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:30.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.133 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:30.134 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
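(Aside: the device-discovery block that just repeated for this test is plain sysfs globbing — no driver queries involved. Stripped of the e810/x722/mlx PCI-ID tables, the core pattern reduces to the following sketch, with the 0000:0a:00.x addresses coming from this machine; the real nvmf/common.sh additionally handles the no-interface and link-down cases:

    # Map PCI network functions to their kernel interface names via sysfs.
    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")      # keep the ifname, drop the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With both interfaces found, the trace below re-runs nvmf_tcp_init exactly as in the abort test.)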
00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:13:30.134 00:13:30.134 --- 10.0.0.2 ping statistics --- 00:13:30.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.134 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:13:30.134 00:13:30.134 --- 10.0.0.1 ping statistics --- 00:13:30.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.134 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3482468 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3482468 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3482468 ']' 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:30.134 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.394 [2024-07-26 22:43:22.641661] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:30.394 [2024-07-26 22:43:22.641738] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.394 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.394 [2024-07-26 22:43:22.715391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.394 [2024-07-26 22:43:22.808344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:30.394 [2024-07-26 22:43:22.808412] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.394 [2024-07-26 22:43:22.808437] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.394 [2024-07-26 22:43:22.808451] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.394 [2024-07-26 22:43:22.808462] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.394 [2024-07-26 22:43:22.808556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.394 [2024-07-26 22:43:22.808613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.394 [2024-07-26 22:43:22.808617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.652 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:30.652 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:30.652 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.652 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.652 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.652 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.652 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:30.652 22:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:30.910 [2024-07-26 22:43:23.204674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.910 22:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:31.168 22:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.426 [2024-07-26 22:43:23.707251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.426 22:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:31.684 22:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:31.942 Malloc0 00:13:31.942 22:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:32.201 Delay0 00:13:32.201 22:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.460 22:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:32.460 NULL1 00:13:32.718 22:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:32.718 22:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3482768 00:13:32.718 22:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:32.718 22:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:32.718 22:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.977 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.977 22:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.235 22:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:33.235 22:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:33.493 true 00:13:33.493 22:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:33.494 22:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.753 22:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.012 22:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:34.012 22:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:34.270 true 00:13:34.270 22:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:34.270 22:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.204 Read completed with error (sct=0, sc=11) 00:13:35.204 22:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.463 22:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:35.463 22:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:35.720 true 00:13:35.720 22:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:35.720 22:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.979 22:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.237 22:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:36.237 22:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:36.495 true 00:13:36.495 22:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:36.495 22:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.429 22:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.687 22:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:37.687 22:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:37.945 true 00:13:37.945 22:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:37.945 22:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.203 22:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.461 22:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:38.461 22:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:38.719 true 00:13:38.719 22:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:38.719 22:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.653 22:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.911 22:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:39.911 22:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:39.911 true 00:13:40.169 22:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:40.169 22:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.428 22:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.686 22:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:40.686 22:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:40.686 true 00:13:40.686 22:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:40.686 22:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.622 22:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.879 22:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:41.879 22:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:42.136 true 00:13:42.136 22:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:42.136 22:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.395 22:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.652 22:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:42.652 22:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:42.909 true 00:13:42.909 22:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:42.909 22:43:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.840 22:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.840 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.097 22:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:44.097 22:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:44.705 true 00:13:44.705 22:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:44.705 22:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.270 22:43:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.527 22:43:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:45.527 22:43:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:45.783 true 00:13:45.783 22:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:45.783 22:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.040 22:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.297 22:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:46.297 22:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:46.555 true 00:13:46.555 22:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:46.555 22:43:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.487 22:43:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.487 22:43:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:47.487 22:43:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:47.745 true 00:13:47.745 22:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:47.746 22:43:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.002 22:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.258 22:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:48.258 22:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:48.515 true 00:13:48.515 22:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:48.515 22:43:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.446 22:43:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.703 22:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:49.703 22:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:49.960 true 00:13:49.960 22:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:49.960 22:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.216 22:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.473 22:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:50.473 22:43:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:50.730 true 00:13:50.730 22:43:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:50.730 22:43:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.660 22:43:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.660 22:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:51.660 22:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:51.917 true 00:13:51.917 22:43:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:51.917 22:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.174 22:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.431 22:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:52.431 22:43:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:52.688 true 00:13:52.688 22:43:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:52.689 22:43:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.621 22:43:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.185 22:43:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:54.185 22:43:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:54.185 true 00:13:54.185 22:43:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:54.185 22:43:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.443 22:43:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.700 22:43:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:54.700 22:43:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:54.958 true 00:13:54.958 22:43:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:54.958 22:43:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.890 22:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.147 22:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:56.147 22:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:56.405 true 00:13:56.405 22:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:56.405 22:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.662 22:43:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.919 22:43:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:56.919 22:43:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:57.177 true 00:13:57.177 22:43:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:57.177 22:43:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.109 22:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.367 22:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:58.367 22:43:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:58.624 true 00:13:58.624 22:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:58.624 22:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.556 22:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.556 22:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:59.556 22:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:59.814 true 00:13:59.814 22:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:13:59.814 22:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.072 22:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.329 22:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:00.329 22:43:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:00.617 true 00:14:00.617 22:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:14:00.617 22:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.567 22:43:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.825 22:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:01.825 22:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:01.825 true 00:14:02.082 22:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:14:02.082 22:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.082 22:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.340 22:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:02.340 22:43:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:02.597 true 00:14:02.597 22:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768 00:14:02.597 22:43:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.968 Initializing NVMe Controllers 00:14:03.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:03.968 Controller IO queue size 128, less than required. 00:14:03.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:03.968 Controller IO queue size 128, less than required. 00:14:03.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:03.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:03.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:03.968 Initialization complete. Launching workers. 
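The statistics block below is the final report of the I/O generator (the PID 3482768 process that the sh@44 `kill -0` probes above were watching). Its Total row's Average is the IOPS-weighted mean of the two per-namespace averages; a quick check of the figures, with the values hardcoded from the table that follows (an editor's sketch, awk used only for the floating-point arithmetic):

    # Check: Total average latency = IOPS-weighted mean of the two NSID rows.
    awk 'BEGIN {
        iops1 = 1147.79;  avg1 = 63387.78   # NSID 1 row from the table below
        iops2 = 11624.51; avg2 = 10979.11   # NSID 2 row
        total = iops1 + iops2               # 12772.30, the Total IOPS
        printf "%.2f\n", (iops1 * avg1 + iops2 * avg2) / total   # ~15688.86 us
    }'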
00:14:03.968 ========================================================
00:14:03.968 Latency(us)
00:14:03.968 Device Information : IOPS MiB/s Average min max
00:14:03.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1147.79 0.56 63387.78 2513.16 1032701.36
00:14:03.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11624.51 5.68 10979.11 2919.52 448379.31
00:14:03.968 ========================================================
00:14:03.968 Total : 12772.30 6.24 15688.86 2513.16 1032701.36
00:14:03.968
00:14:03.968 22:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:03.968 22:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:14:03.968 22:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:14:04.225 true
00:14:04.225 22:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3482768
00:14:04.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3482768) - No such process
00:14:04.225 22:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3482768
00:14:04.225 22:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:04.483 22:43:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:04.740 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:04.740 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:04.740 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:04.740 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:04.740 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:04.998 null0
00:14:04.998 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:04.998 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:04.998 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:05.256 null1
00:14:05.256 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:05.256 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:05.256 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:05.256 null2
00:14:05.513 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:05.513 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:14:05.513 22:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:05.513 null3 00:14:05.513 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:05.513 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:05.513 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:05.771 null4 00:14:05.771 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:05.771 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:05.771 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:06.028 null5 00:14:06.028 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:06.028 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:06.028 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:06.286 null6 00:14:06.286 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:06.286 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:06.286 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:06.545 null7 00:14:06.545 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:06.545 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:06.545 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:06.545 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:06.545 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:06.545 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:06.545 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:06.545 22:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
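The sh@58-@60 trace above is the setup stage of ns_hotplug_stress.sh: eight small null bdevs are created, one per worker. A minimal sketch of that stage, reconstructed from the xtrace (not the verbatim script; `$rpc_py` stands in for the scripts/rpc.py path seen in the log):

    #!/usr/bin/env bash
    # Sketch reconstructed from the sh@58-@60 xtrace above -- not the verbatim script.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    nthreads=8
    pids=()

    # One 100 MiB null bdev with a 4096-byte block size per worker: null0 .. null7.
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done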
00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
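The interleaved sh@14-@18 lines are eight concurrent copies of the same worker body. Pieced together from that xtrace, add_remove() plausibly looks like the sketch below (only the calls visible in the trace are assumed; the real function lives in spdk/test/nvmf/target/ns_hotplug_stress.sh):

    # Sketch of add_remove() reconstructed from the sh@14-@18 xtrace.
    add_remove() {
        local nsid=$1 bdev=$2
        # Ten hotplug cycles: attach the null bdev as namespace $nsid, then
        # detach it again, while I/O keeps running against the subsystem.
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }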
00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.545 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
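The sh@62-@64 lines above launch those workers in the background, and the `wait` on the eight collected PIDs appears just below (sh@66). A sketch of the launch stage under the same assumptions, with namespace ID i+1 paired to bdev null$i as the `add_remove 1 null0` ... `add_remove 7 null6` trace lines show:

    # Sketch of the launch stage from the sh@62-@66 xtrace: one background
    # add_remove worker per null bdev, namespace IDs 1..8.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)       # sh@64: remember each worker PID
    done
    wait "${pids[@]}"    # sh@66: the literal PIDs appear in the log below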
00:14:06.546 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:06.546 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:06.546 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:06.546 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:06.546 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:06.546 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3486816 3486817 3486818 3486821 3486823 3486825 3486827 3486829 00:14:06.546 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.546 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:06.804 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.804 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.804 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.804 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:06.804 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.804 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.804 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.804 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.062 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:07.321 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.321 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.321 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.321 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.321 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.579 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.579 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.579 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.579 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.579 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.579 22:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.837 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.837 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.837 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:07.837 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.837 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.837 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:07.837 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.837 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.837 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.838 22:44:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.838 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.096 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.096 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:08.096 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:08.096 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.096 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:08.096 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:08.096 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.096 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.355 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:08.612 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:08.612 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:08.612 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.612 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:08.612 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.613 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.613 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:08.613 22:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.870 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.870 
22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.128 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.128 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.128 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.128 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.128 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.128 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.128 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.128 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.386 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.644 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.644 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.644 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.644 22:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.644 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.644 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.644 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.644 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.901 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.158 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.158 
22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:10.158 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:10.158 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.158 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.158 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:10.158 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.158 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.416 22:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:10.675 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.675 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:10.675 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:10.675 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:10.675 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.675 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.675 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.675 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.933 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:11.191 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.191 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.191 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.191 
22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.191 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.191 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.191 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.191 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.449 22:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:11.707 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.707 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.707 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.707 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.707 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.707 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.707 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.707 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
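The scrambled interleaving of (( ++i )) / nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns entries above and below is consistent with eight concurrent hotplug workers: one background job per namespace, each looping ten times over ns_hotplug_stress.sh lines 16-18, with the bare (( ++i )) / (( i < 10 )) pairs just below being each worker's final, failing loop test. A minimal bash reconstruction under that assumption; the script body itself is not reproduced in this log, so the function name and job layout here are illustrative:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local n=$1
        for ((i = 0; i < 10; ++i)); do    # ns_hotplug_stress.sh@16
            # namespace n is backed by null bdev n-1 (null0..null7)
            $rpc nvmf_subsystem_add_ns -n $n nqn.2016-06.io.spdk:cnode1 null$((n - 1))    # sh@17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $n    # sh@18
        done
    }
    for n in $(seq 1 8); do
        add_remove $n &    # eight concurrent workers; hence the interleaved xtrace
    done
    wait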
00:14:11.965 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.222 rmmod nvme_tcp 00:14:12.222 rmmod nvme_fabrics 00:14:12.222 rmmod nvme_keyring 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3482468 ']' 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3482468 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3482468 ']' 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3482468 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3482468 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3482468' 00:14:12.222 killing process with pid 3482468 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3482468 00:14:12.222 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3482468 00:14:12.482 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.482 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.482 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:12.482 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:12.482 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:12.482 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:12.482 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:12.482 22:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:14.382 22:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:14.382
00:14:14.382 real 0m46.439s
00:14:14.382 user 3m31.857s
00:14:14.382 sys 0m16.493s
00:14:14.382 22:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:14.382 22:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:14:14.382 ************************************
00:14:14.382 END TEST nvmf_ns_hotplug_stress
00:14:14.382 ************************************
00:14:14.382 22:44:06 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:14.382 22:44:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:14.382 22:44:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:14.382 22:44:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:14.382 ************************************
00:14:14.382 START TEST nvmf_connect_stress
00:14:14.382 ************************************
00:14:14.382 22:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:14.640 * Looking for test storage...
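The real/user/sys timing block and the starred END/START banners above come from the run_test wrapper in autotest_common.sh, which brackets and times every test script. Roughly, as a simplified sketch (the real wrapper also manages xtrace state and recorded timings, which is not shown here):

    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"    # the test script itself; source of the real/user/sys lines
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }

Here it was invoked as run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp, which is why every following trace line carries the nvmf_tcp.nvmf_connect_stress tag.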
00:14:14.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:14.640 22:44:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:16.541 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:16.541 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:16.541 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.541 22:44:08 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:16.541 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.541 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.542 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.542 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.542 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.542 22:44:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.542 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.542 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.542 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:16.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms
00:14:16.800
00:14:16.800 --- 10.0.0.2 ping statistics ---
00:14:16.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:16.800 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:16.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:16.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms
00:14:16.800
00:14:16.800 --- 10.0.0.1 ping statistics ---
00:14:16.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:16.800 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:16.800 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3489566
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3489566
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3489566 ']'
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:16.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable
00:14:16.801 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:16.801 [2024-07-26 22:44:09.121977] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
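nvmfappstart (nvmf/common.sh@479-482, traced above) boils down to launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then blocking in waitforlisten until the app answers on its RPC socket. A rough bash equivalent, assuming a simple poll (the real waitforlisten also enforces the max_retries=100 bound seen above; rpc_get_methods is a stock SPDK RPC used here only as a liveness probe):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX domain socket until the target is ready to serve RPCs
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.1
    done

Once this returns, the test proceeds (below) with nvmf_create_transport, nvmf_create_subsystem and a TCP listener on 10.0.0.2:4420.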
00:14:16.801 [2024-07-26 22:44:09.122068] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.801 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.801 [2024-07-26 22:44:09.188036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.801 [2024-07-26 22:44:09.274381] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.801 [2024-07-26 22:44:09.274435] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.801 [2024-07-26 22:44:09.274459] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.801 [2024-07-26 22:44:09.274471] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.801 [2024-07-26 22:44:09.274481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.801 [2024-07-26 22:44:09.274550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.801 [2024-07-26 22:44:09.274608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.801 [2024-07-26 22:44:09.274612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.066 [2024-07-26 22:44:09.418380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.066 [2024-07-26 22:44:09.452233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.066 NULL1 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3489594 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.066 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.385 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.385 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:17.385 22:44:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.385 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.385 22:44:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.950 22:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.950 22:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:17.950 22:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.950 22:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.950 22:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.208 22:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.208 22:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:18.208 22:44:10 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.208 22:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.208 22:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.466 22:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.466 22:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:18.466 22:44:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.466 22:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.466 22:44:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.723 22:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.723 22:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:18.723 22:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.723 22:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.723 22:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.981 22:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.981 22:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:18.981 22:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.981 22:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.981 22:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.546 22:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.546 22:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:19.546 22:44:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.546 22:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.546 22:44:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.803 22:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.803 22:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:19.803 22:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.803 22:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.803 22:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.061 22:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.061 22:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:20.061 22:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.061 22:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.061 22:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.319 22:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.319 22:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:20.319 22:44:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:20.319 22:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.319 22:44:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.577 22:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.577 22:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:20.577 22:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.577 22:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.577 22:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.140 22:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.140 22:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:21.140 22:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.140 22:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.140 22:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.397 22:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.397 22:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:21.397 22:44:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.397 22:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.397 22:44:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.654 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.654 22:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:21.654 22:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.654 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.654 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.911 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.911 22:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:21.911 22:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.911 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.911 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.168 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.168 22:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:22.168 22:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.168 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.168 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.732 22:44:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.732 22:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:22.732 22:44:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.732 22:44:14 
00:14:22.732-00:14:27.258 22:44:14-22:44:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34-35 + common/autotest_common.sh@559/@587 -- # kill -0 3489594 / rpc_cmd / xtrace_disable (this three-command liveness poll repeats unchanged roughly every 250-570 ms while the stress process is alive; successive iterations differ only in their timestamps, so they are collapsed here)
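The loop collapsed above is the standard liveness poll in connect_stress.sh: while the stress process still exists, keep the target busy over RPC. A minimal sketch of the pattern, assuming PID 3489594 from the trace and treating the in-loop RPC (logged only as rpc_cmd, its arguments hidden by xtrace_disable) as a placeholder:

    pid=3489594                            # stress process launched earlier in the test
    while kill -0 "$pid" 2>/dev/null; do   # kill -0 only checks existence, sends no signal
        rpc_cmd >/dev/null                 # placeholder for the traced RPC; real args not shown
        sleep 0.25                         # assumed pacing; the log shows 250-570 ms per pass
    done

Once kill -0 fails the script falls through to wait and cleanup, which is exactly what the next lines show.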
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.258 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3489594 00:14:27.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3489594) - No such process 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3489594 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.516 rmmod nvme_tcp 00:14:27.516 rmmod nvme_fabrics 00:14:27.516 rmmod nvme_keyring 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3489566 ']' 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3489566 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3489566 ']' 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3489566 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:27.516 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:27.517 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3489566 00:14:27.517 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:27.517 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:27.517 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3489566' 00:14:27.517 killing process with pid 3489566 00:14:27.517 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3489566 00:14:27.517 22:44:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3489566 00:14:27.776 22:44:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.776 22:44:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:27.776 22:44:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
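The unload sequence above runs under set +e because nvme-tcp can stay referenced for a moment after the last disconnect, so the script retries up to 20 times. A sketch of that retry pattern (only the {1..20} bound and the two modprobe -v -r calls appear in the trace; the back-off between attempts is an assumption):

    set +e                                  # a failed unload must not abort the script
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                             # assumed pause between attempts
    done
    set -e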
00:14:27.776 22:44:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.776 22:44:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:27.776 22:44:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.776 22:44:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.776 22:44:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.676 22:44:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:29.676 00:14:29.676 real 0m15.263s 00:14:29.676 user 0m37.962s 00:14:29.676 sys 0m6.095s 00:14:29.676 22:44:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:29.676 22:44:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.676 ************************************ 00:14:29.676 END TEST nvmf_connect_stress 00:14:29.676 ************************************ 00:14:29.676 22:44:22 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:29.676 22:44:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:29.676 22:44:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:29.676 22:44:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.935 ************************************ 00:14:29.935 START TEST nvmf_fused_ordering 00:14:29.935 ************************************ 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:29.935 * Looking for test storage... 
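run_test is the harness wrapper that produces the START TEST / END TEST banners and the real/user/sys timing block above. A sketch of its visible behaviour only (not SPDK's exact implementation, which also validates the argument count and toggles xtrace):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                           # run the test script with its arguments
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }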
00:14:29.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
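Worth noting in the environment setup above: NVME_HOSTID is simply the UUID suffix of the NQN that nvme gen-hostnqn produced. A one-line derivation that reproduces the values in the trace (a sketch; nvmf/common.sh may compute it differently):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # strip through the last ':' to keep the UUID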
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
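The repeated sourcing of paths/export.sh keeps prepending the same golangci/protoc/go directories, which is why the PATH values above carry that triple many times over. Harmless for lookup order, but an illustrative de-duplication pass (not part of paths/export.sh) would keep only the first occurrence of each entry:

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH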
/dev/null' 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.935 22:44:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:31.834 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:31.834 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:31.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.834 22:44:24 
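The array setup and the [[ 0x159b == ... ]] pattern matches above amount to classifying each PCI function by device ID and then listing the net interfaces under it in sysfs. A compact sketch of both steps (device IDs taken from the arrays above; reading operstate is an assumption for the [[ up == up ]] test):

    classify_nic() {                        # map a device ID to a NIC family
        case "$1" in
            0x1592|0x159b) echo e810 ;;     # Intel E810 (ice), the two ports found here
            0x37d2)        echo x722 ;;
            0x1013|0x1015|0x1017|0x1019|0x101d|0x1021|0xa2d6|0xa2dc) echo mlx ;;
            *)             echo unknown ;;
        esac
    }
    for dev in /sys/bus/pci/devices/0000:0a:00.0/net/*; do
        [[ $(cat "$dev/operstate") == up ]] && echo "Found net device: ${dev##*/}"
    done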
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:31.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.834 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.835 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:32.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
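nvmf_tcp_init above builds the whole test topology from the two E810 ports: the target-side interface moves into a fresh network namespace and gets 10.0.0.2, while the initiator side stays in the root namespace as 10.0.0.1. A condensed replay of the commands traced above (the initial ip -4 addr flush steps omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1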
00:14:32.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:14:32.092 00:14:32.092 --- 10.0.0.2 ping statistics --- 00:14:32.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.092 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:14:32.092 00:14:32.092 --- 10.0.0.1 ping statistics --- 00:14:32.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.092 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3492860 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3492860 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3492860 ']' 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:32.092 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.092 [2024-07-26 22:44:24.441870] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
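nvmfappstart above launches the target inside that namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0x2, PID 3492860) and waitforlisten blocks until the RPC socket answers. A sketch of the launch-and-wait pattern; the readiness loop is an assumption, since waitforlisten's real logic lives in autotest_common.sh:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1        # give up if the target died during startup
        sleep 0.1                           # poll until the UNIX socket accepts RPCs
    done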
00:14:32.092 [2024-07-26 22:44:24.441944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.092 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.092 [2024-07-26 22:44:24.504090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.092 [2024-07-26 22:44:24.586959] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.092 [2024-07-26 22:44:24.587009] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.092 [2024-07-26 22:44:24.587032] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.092 [2024-07-26 22:44:24.587043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.092 [2024-07-26 22:44:24.587076] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.092 [2024-07-26 22:44:24.587101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.350 [2024-07-26 22:44:24.729271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.350 [2024-07-26 22:44:24.745506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- 
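With the target listening, the three RPCs above stand up the TCP transport, the subsystem, and its listener. The same calls issued directly through rpc.py, flags copied verbatim from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420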
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.350 NULL1 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.350 22:44:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:32.350 [2024-07-26 22:44:24.789407] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:32.350 [2024-07-26 22:44:24.789450] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492882 ] 00:14:32.350 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.914 Attached to nqn.2016-06.io.spdk:cnode1 00:14:32.914 Namespace ID: 1 size: 1GB 00:14:32.914 fused_ordering(0) 00:14:32.914 fused_ordering(1) 00:14:32.915 fused_ordering(2) 00:14:32.915 fused_ordering(3) 00:14:32.915 fused_ordering(4) 00:14:32.915 fused_ordering(5) 00:14:32.915 fused_ordering(6) 00:14:32.915 fused_ordering(7) 00:14:32.915 fused_ordering(8) 00:14:32.915 fused_ordering(9) 00:14:32.915 fused_ordering(10) 00:14:32.915 fused_ordering(11) 00:14:32.915 fused_ordering(12) 00:14:32.915 fused_ordering(13) 00:14:32.915 fused_ordering(14) 00:14:32.915 fused_ordering(15) 00:14:32.915 fused_ordering(16) 00:14:32.915 fused_ordering(17) 00:14:32.915 fused_ordering(18) 00:14:32.915 fused_ordering(19) 00:14:32.915 fused_ordering(20) 00:14:32.915 fused_ordering(21) 00:14:32.915 fused_ordering(22) 00:14:32.915 fused_ordering(23) 00:14:32.915 fused_ordering(24) 00:14:32.915 fused_ordering(25) 00:14:32.915 fused_ordering(26) 00:14:32.915 fused_ordering(27) 00:14:32.915 fused_ordering(28) 00:14:32.915 fused_ordering(29) 00:14:32.915 fused_ordering(30) 00:14:32.915 fused_ordering(31) 00:14:32.915 fused_ordering(32) 00:14:32.915 fused_ordering(33) 00:14:32.915 fused_ordering(34) 00:14:32.915 fused_ordering(35) 00:14:32.915 fused_ordering(36) 00:14:32.915 fused_ordering(37) 00:14:32.915 fused_ordering(38) 00:14:32.915 fused_ordering(39) 00:14:32.915 fused_ordering(40) 00:14:32.915 fused_ordering(41) 00:14:32.915 fused_ordering(42) 00:14:32.915 fused_ordering(43) 00:14:32.915 fused_ordering(44) 00:14:32.915 fused_ordering(45) 
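Setup finishes above with a 1000 MiB null bdev exposed as namespace 1 (reported as "size: 1GB" on attach), and the fused_ordering initiator connecting over TCP. The equivalent direct calls, all values from the trace; the tool then prints one fused_ordering(N) line per iteration, collapsed below:

    ./scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB backing, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'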
00:14:32.915-00:14:35.956 fused_ordering(46) ... fused_ordering(906) (the initiator emits one fused_ordering(N) progress line per fused-command iteration; the run continues in identically formatted batches whose only new information is the advancing timestamps 00:14:33.479, 00:14:34.088, 00:14:35.025 and 00:14:35.956, so the enumeration is collapsed here)
fused_ordering(907) 00:14:35.956 fused_ordering(908) 00:14:35.956 fused_ordering(909) 00:14:35.956 fused_ordering(910) 00:14:35.956 fused_ordering(911) 00:14:35.956 fused_ordering(912) 00:14:35.956 fused_ordering(913) 00:14:35.956 fused_ordering(914) 00:14:35.956 fused_ordering(915) 00:14:35.956 fused_ordering(916) 00:14:35.956 fused_ordering(917) 00:14:35.956 fused_ordering(918) 00:14:35.956 fused_ordering(919) 00:14:35.956 fused_ordering(920) 00:14:35.956 fused_ordering(921) 00:14:35.956 fused_ordering(922) 00:14:35.956 fused_ordering(923) 00:14:35.956 fused_ordering(924) 00:14:35.956 fused_ordering(925) 00:14:35.956 fused_ordering(926) 00:14:35.956 fused_ordering(927) 00:14:35.956 fused_ordering(928) 00:14:35.956 fused_ordering(929) 00:14:35.956 fused_ordering(930) 00:14:35.956 fused_ordering(931) 00:14:35.956 fused_ordering(932) 00:14:35.956 fused_ordering(933) 00:14:35.956 fused_ordering(934) 00:14:35.956 fused_ordering(935) 00:14:35.956 fused_ordering(936) 00:14:35.956 fused_ordering(937) 00:14:35.956 fused_ordering(938) 00:14:35.956 fused_ordering(939) 00:14:35.956 fused_ordering(940) 00:14:35.956 fused_ordering(941) 00:14:35.956 fused_ordering(942) 00:14:35.956 fused_ordering(943) 00:14:35.956 fused_ordering(944) 00:14:35.956 fused_ordering(945) 00:14:35.956 fused_ordering(946) 00:14:35.956 fused_ordering(947) 00:14:35.956 fused_ordering(948) 00:14:35.956 fused_ordering(949) 00:14:35.956 fused_ordering(950) 00:14:35.956 fused_ordering(951) 00:14:35.956 fused_ordering(952) 00:14:35.956 fused_ordering(953) 00:14:35.956 fused_ordering(954) 00:14:35.956 fused_ordering(955) 00:14:35.956 fused_ordering(956) 00:14:35.956 fused_ordering(957) 00:14:35.956 fused_ordering(958) 00:14:35.956 fused_ordering(959) 00:14:35.956 fused_ordering(960) 00:14:35.956 fused_ordering(961) 00:14:35.956 fused_ordering(962) 00:14:35.956 fused_ordering(963) 00:14:35.956 fused_ordering(964) 00:14:35.956 fused_ordering(965) 00:14:35.956 fused_ordering(966) 00:14:35.956 fused_ordering(967) 00:14:35.956 fused_ordering(968) 00:14:35.956 fused_ordering(969) 00:14:35.956 fused_ordering(970) 00:14:35.956 fused_ordering(971) 00:14:35.956 fused_ordering(972) 00:14:35.956 fused_ordering(973) 00:14:35.956 fused_ordering(974) 00:14:35.956 fused_ordering(975) 00:14:35.956 fused_ordering(976) 00:14:35.956 fused_ordering(977) 00:14:35.956 fused_ordering(978) 00:14:35.956 fused_ordering(979) 00:14:35.956 fused_ordering(980) 00:14:35.956 fused_ordering(981) 00:14:35.956 fused_ordering(982) 00:14:35.956 fused_ordering(983) 00:14:35.956 fused_ordering(984) 00:14:35.956 fused_ordering(985) 00:14:35.956 fused_ordering(986) 00:14:35.956 fused_ordering(987) 00:14:35.956 fused_ordering(988) 00:14:35.956 fused_ordering(989) 00:14:35.956 fused_ordering(990) 00:14:35.956 fused_ordering(991) 00:14:35.956 fused_ordering(992) 00:14:35.956 fused_ordering(993) 00:14:35.956 fused_ordering(994) 00:14:35.956 fused_ordering(995) 00:14:35.956 fused_ordering(996) 00:14:35.956 fused_ordering(997) 00:14:35.956 fused_ordering(998) 00:14:35.956 fused_ordering(999) 00:14:35.956 fused_ordering(1000) 00:14:35.956 fused_ordering(1001) 00:14:35.956 fused_ordering(1002) 00:14:35.956 fused_ordering(1003) 00:14:35.956 fused_ordering(1004) 00:14:35.956 fused_ordering(1005) 00:14:35.956 fused_ordering(1006) 00:14:35.956 fused_ordering(1007) 00:14:35.956 fused_ordering(1008) 00:14:35.956 fused_ordering(1009) 00:14:35.956 fused_ordering(1010) 00:14:35.956 fused_ordering(1011) 00:14:35.956 fused_ordering(1012) 00:14:35.956 fused_ordering(1013) 00:14:35.956 
fused_ordering(1014) 00:14:35.956 fused_ordering(1015) 00:14:35.956 fused_ordering(1016) 00:14:35.956 fused_ordering(1017) 00:14:35.956 fused_ordering(1018) 00:14:35.956 fused_ordering(1019) 00:14:35.956 fused_ordering(1020) 00:14:35.956 fused_ordering(1021) 00:14:35.956 fused_ordering(1022) 00:14:35.956 fused_ordering(1023) 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.956 rmmod nvme_tcp 00:14:35.956 rmmod nvme_fabrics 00:14:35.956 rmmod nvme_keyring 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:35.956 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3492860 ']' 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3492860 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3492860 ']' 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3492860 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3492860 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3492860' 00:14:35.957 killing process with pid 3492860 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3492860 00:14:35.957 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3492860 00:14:36.214 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:36.214 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:36.214 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:36.214 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.214 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.214 22:44:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.214 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
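Before the next test starts, the teardown records above are worth unwrapping. A minimal sketch, assuming the usual shape of nvmftestfini/nvmfcleanup/killprocess in this tree (the real helpers in test/nvmf/common.sh and autotest_common.sh retry the module removal up to 20 times, as the {1..20} loop above shows, and do more bookkeeping):

    sync                               # flush before yanking the kernel modules
    modprobe -v -r nvme-tcp            # cascades: the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are its -v output
    modprobe -v -r nvme-fabrics        # usually a no-op after the cascade
    kill "$nvmfpid"                    # nvmf_tgt reactor process, pid 3492860 in this run
    wait "$nvmfpid" 2>/dev/null || true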
00:14:36.214 22:44:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.117 22:44:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.117 00:14:38.117 real 0m8.370s 00:14:38.117 user 0m4.844s 00:14:38.117 sys 0m4.451s 00:14:38.117 22:44:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:38.117 22:44:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:38.117 ************************************ 00:14:38.117 END TEST nvmf_fused_ordering 00:14:38.117 ************************************ 00:14:38.117 22:44:30 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:38.117 22:44:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:38.117 22:44:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:38.117 22:44:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.117 ************************************ 00:14:38.117 START TEST nvmf_delete_subsystem 00:14:38.117 ************************************ 00:14:38.117 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:38.376 * Looking for test storage... 00:14:38.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.376 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh [paths/export.sh@2 through @6: PATH rebuilt three times with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended ahead of the system directories, then exported and echoed; the repeated ~700-character PATH values are collapsed] 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.377 22:44:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:40.277 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:40.277 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.277 22:44:32 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:40.277 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.277 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:40.278 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.278 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:14:40.537 00:14:40.537 --- 10.0.0.2 ping statistics --- 00:14:40.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.537 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:14:40.537 00:14:40.537 --- 10.0.0.1 ping statistics --- 00:14:40.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.537 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3495214 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3495214 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3495214 ']' 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:40.537 22:44:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.537 [2024-07-26 22:44:32.989173] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:40.537 [2024-07-26 22:44:32.989259] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.537 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.795 [2024-07-26 22:44:33.056295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:40.795 [2024-07-26 22:44:33.141190] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
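The network bring-up and target launch are spread across many wrapped records above; condensed, they look like the following. This is a recap on the assumption that the harness has already renamed the two ice ports to cvl_0_0/cvl_0_1 (the real nvmf_tcp_init in nvmf/common.sh also flushes stale addresses from both interfaces first):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # reachability verified in both directions above
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    # waitforlisten then polls /var/tmp/spdk.sock until the app answers RPCs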
00:14:40.795 [2024-07-26 22:44:33.141242] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.795 [2024-07-26 22:44:33.141271] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.795 [2024-07-26 22:44:33.141287] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.795 [2024-07-26 22:44:33.141297] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.795 [2024-07-26 22:44:33.141352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.795 [2024-07-26 22:44:33.141357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.795 [2024-07-26 22:44:33.274893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.795 [2024-07-26 22:44:33.291176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.795 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:41.052 NULL1 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:41.052 Delay0 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3495241 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:41.052 22:44:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:41.052 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.052 [2024-07-26 22:44:33.365902] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
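Unwrapped, the rpc_cmd sequence that builds the target and the perf job that loads it look like this (rpc_cmd is a thin wrapper over scripts/rpc.py against /var/tmp/spdk.sock; the flag comments are my reading of the rpc.py and perf options, not output from the log). The delay bdev is the heart of the test: with roughly one second of injected latency per I/O, the nvmf_delete_subsystem call that follows is guaranteed to race with a full queue of in-flight commands.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -u sets the I/O unit size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                        # allow any host, max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000           # avg/p99 read and write latency, microseconds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &             # 5 s, queue depth 128, 70% reads, 512 B I/Os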
00:14:42.945 22:44:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.945 22:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.946 22:44:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write 
completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Write completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 Read completed with error (sct=0, sc=8) 00:14:43.203 starting I/O failed: -6 00:14:43.203 Read completed with error (sct=0, sc=8) 
00:14:43.203 Read completed with error (sct=0, sc=8)
00:14:43.203 starting I/O failed: -6
00:14:43.203 Write completed with error (sct=0, sc=8)
00:14:43.203-00:14:44.137 [approximately 220 further interleaved "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" entries elided]
00:14:43.204 [2024-07-26 22:44:35.538416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2072180 is same with the state(5) to be set
00:14:44.136 [2024-07-26 22:44:36.504650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20758b0 is same with the state(5) to be set
00:14:44.136 [2024-07-26 22:44:36.539447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fba2800bfe0 is same with the state(5) to be set
00:14:44.137 [2024-07-26 22:44:36.539712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fba2800c600 is same with the state(5) to be set
00:14:44.137 [2024-07-26 22:44:36.539879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2072360 is same with the state(5) to be set
00:14:44.137 [2024-07-26 22:44:36.540307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2072aa0 is same with the state(5) to be set
00:14:44.137 Initializing NVMe Controllers
00:14:44.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:44.137 Controller IO queue size 128, less than required.
00:14:44.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
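Note on the status codes above: (sct, sc) are the NVMe completion's Status Code Type and Status Code fields. SCT 0 is the generic command status set, where SC 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base specification -- consistent with the target tearing the subsystem down while spdk_nvme_perf still had I/O in flight -- and the -6 in "starting I/O failed: -6" is most likely a negated Linux errno, -ENXIO, returned once new submissions hit the dead qpair. A minimal illustrative decoder (a hypothetical helper, not part of the test scripts), handling only the pair seen in this log:

    #!/usr/bin/env bash
    # decode_nvme_status SCT SC -- map the (sct, sc) pairs printed by
    # spdk_nvme_perf to text; only the pair seen in this log is handled.
    decode_nvme_status() {
        local sct=$1 sc=$2
        if (( sct == 0 && sc == 8 )); then
            # NVMe base spec, Generic Command Status (SCT 0), SC 0x08
            echo "Command Aborted due to SQ Deletion"
        else
            echo "sct=$sct sc=$sc: see the NVMe base spec status code tables"
        fi
    }
    decode_nvme_status 0 8   # -> Command Aborted due to SQ Deletion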
00:14:44.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:44.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:44.137 Initialization complete. Launching workers.
00:14:44.137 ========================================================
00:14:44.137                                                                              Latency(us)
00:14:44.137 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:14:44.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     166.23       0.08  904468.47     634.81 1013002.15
00:14:44.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     186.08       0.09  906295.23     597.78 1011804.95
00:14:44.137 ========================================================
00:14:44.137 Total                                                                    :     352.31       0.17  905433.31     597.78 1013002.15
00:14:44.137
00:14:44.137 [2024-07-26 22:44:36.541297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20758b0 (9): Bad file descriptor
00:14:44.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:44.137 22:44:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.137 22:44:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:44.137 22:44:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3495241 00:14:44.137 22:44:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3495241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3495241) - No such process 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3495241 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3495241 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3495241 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.702 22:44:37
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.702 [2024-07-26 22:44:37.064079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3495763 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3495763 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.702 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:44.702 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.702 [2024-07-26 22:44:37.128770] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
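Condensed, the round of target/delete_subsystem.sh traced here amounts to the sketch below -- a paraphrase of the xtrace output above, not the verbatim script (rpc_cmd wraps /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and the timeout handling is simplified): recreate the subsystem, attach the listener and the Delay0 namespace, start a 3-second 70/30 random read/write load, then poll until the perf process exits.

    # Sketch of the traced flow (paraphrased, not the verbatim script).
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 3-second randrw load, 70% reads, 512-byte IOs, queue depth 128, on cores 2-3
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # mirrors the kill -0 / sleep 0.5 loop in the trace
        (( delay++ > 20 )) && exit 1            # the script's ~10 s bound, simplified here
        sleep 0.5
    done
    wait "$perf_pid"

As a sanity check on the perf summaries, the Total row is the IOPS-weighted mean of the per-core rows: in the first table, (166.23*904468.47 + 186.08*906295.23) / 352.31 = 905433.31 us, matching the reported Total average.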
00:14:45.267 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.267 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3495763 00:14:45.267 22:44:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.831 22:44:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.831 22:44:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3495763 00:14:45.831 22:44:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.089 22:44:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:46.089 22:44:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3495763 00:14:46.089 22:44:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.653 22:44:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:46.653 22:44:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3495763 00:14:46.653 22:44:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:47.218 22:44:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:47.218 22:44:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3495763 00:14:47.218 22:44:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:47.784 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:47.784 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3495763 00:14:47.784 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:47.784 Initializing NVMe Controllers 00:14:47.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.784 Controller IO queue size 128, less than required. 00:14:47.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:47.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:47.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:47.784 Initialization complete. Launching workers. 
00:14:47.784 ========================================================
00:14:47.784                                                                              Latency(us)
00:14:47.784 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:14:47.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1004649.10 1000224.40 1014435.36
00:14:47.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1005616.08 1000279.12 1043068.12
00:14:47.784 ========================================================
00:14:47.784 Total                                                                    :     256.00       0.12 1005132.59 1000224.40 1043068.12
00:14:47.784
00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3495763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3495763) - No such process 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3495763 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.349 rmmod nvme_tcp 00:14:48.349 rmmod nvme_fabrics 00:14:48.349 rmmod nvme_keyring 00:14:48.349 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3495214 ']' 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3495214 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3495214 ']' 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3495214 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3495214 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3495214' killing process with pid 3495214 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3495214 00:14:48.350 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait
3495214 00:14:48.609 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.609 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.609 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.609 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.609 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.609 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.609 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.609 22:44:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.522 22:44:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.522 00:14:50.522 real 0m12.349s 00:14:50.522 user 0m27.820s 00:14:50.522 sys 0m2.983s 00:14:50.522 22:44:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:50.522 22:44:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:50.522 ************************************ 00:14:50.522 END TEST nvmf_delete_subsystem 00:14:50.522 ************************************ 00:14:50.522 22:44:42 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:50.522 22:44:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:50.522 22:44:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:50.522 22:44:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:50.522 ************************************ 00:14:50.522 START TEST nvmf_ns_masking 00:14:50.522 ************************************ 00:14:50.522 22:44:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:50.781 * Looking for test storage... 
00:14:50.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.781 22:44:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.781 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:50.781 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.781 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.781 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.781 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=72102327-6323-4be2-b822-e8430b085d2c 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.782 22:44:43 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:50.782 22:44:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:52.702 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:52.703 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:52.703 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:52.703 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:52.703 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT
00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:52.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:52.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms
00:14:52.703
00:14:52.703 --- 10.0.0.2 ping statistics ---
00:14:52.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:52.703 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:52.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:52.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms
00:14:52.703
00:14:52.703 --- 10.0.0.1 ping statistics ---
00:14:52.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:52.703 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3498103 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:52.703 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3498103 00:14:52.704 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3498103 ']' 00:14:52.704 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.704 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:52.704 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.704 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:52.704 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:52.977 [2024-07-26 22:44:45.227439] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
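For orientation: the nvmf_tcp_init trace above put the target-side interface into its own network namespace, so the SPDK target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator tools run in the root namespace on 10.0.0.1 -- which is why both pings succeed from opposite sides. Condensed from the traced commands (interface names are as discovered on this runner, not fixed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # nvmf_tgt itself is then launched inside the namespace, as traced above:
    #   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF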
00:14:52.977 [2024-07-26 22:44:45.227531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.977 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.977 [2024-07-26 22:44:45.294179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.977 [2024-07-26 22:44:45.385379] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.977 [2024-07-26 22:44:45.385440] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.977 [2024-07-26 22:44:45.385465] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.977 [2024-07-26 22:44:45.385479] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.977 [2024-07-26 22:44:45.385491] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.977 [2024-07-26 22:44:45.385577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.977 [2024-07-26 22:44:45.385631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.977 [2024-07-26 22:44:45.385748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.977 [2024-07-26 22:44:45.385750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.234 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:53.234 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:53.234 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:53.234 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.234 22:44:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:53.234 22:44:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.234 22:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:53.491 [2024-07-26 22:44:45.815905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.491 22:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:53.491 22:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:53.491 22:44:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:53.749 Malloc1 00:14:53.749 22:44:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:54.006 Malloc2 00:14:54.006 22:44:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:54.262 22:44:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:54.519 22:44:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.775 [2024-07-26 22:44:47.102578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.775 22:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:54.775 22:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 72102327-6323-4be2-b822-e8430b085d2c -a 10.0.0.2 -s 4420 -i 4 00:14:55.032 22:44:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:55.032 22:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:55.032 22:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.032 22:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:55.032 22:44:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:56.926 22:44:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:56.926 22:44:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:56.926 22:44:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.926 22:44:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:56.927 [ 0]:0x1 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bec5b3c75a384f0985b6bda19f0dfbb3 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bec5b3c75a384f0985b6bda19f0dfbb3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.927 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:57.184 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:57.184 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.184 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:14:57.184 [ 0]:0x1 00:14:57.184 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.184 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bec5b3c75a384f0985b6bda19f0dfbb3 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bec5b3c75a384f0985b6bda19f0dfbb3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:57.441 [ 1]:0x2 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9aca4835a75844c2bc84fbe51a6d2c2b 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9aca4835a75844c2bc84fbe51a6d2c2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:57.441 22:44:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.699 22:44:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.956 22:44:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:58.214 22:44:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:58.214 22:44:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 72102327-6323-4be2-b822-e8430b085d2c -a 10.0.0.2 -s 4420 -i 4 00:14:58.471 22:44:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:58.471 22:44:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:58.471 22:44:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.471 22:44:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:58.471 22:44:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:58.471 22:44:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:00.367 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.368 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:00.625 [ 0]:0x2 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9aca4835a75844c2bc84fbe51a6d2c2b 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9aca4835a75844c2bc84fbe51a6d2c2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.625 22:44:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:00.883 [ 0]:0x1 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bec5b3c75a384f0985b6bda19f0dfbb3 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bec5b3c75a384f0985b6bda19f0dfbb3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:00.883 [ 1]:0x2 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9aca4835a75844c2bc84fbe51a6d2c2b 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9aca4835a75844c2bc84fbe51a6d2c2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.883 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.141 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.398 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:01.399 
22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:01.399 [ 0]:0x2 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9aca4835a75844c2bc84fbe51a6d2c2b 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9aca4835a75844c2bc84fbe51a6d2c2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.399 22:44:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:01.656 22:44:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:01.656 22:44:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 72102327-6323-4be2-b822-e8430b085d2c -a 10.0.0.2 -s 4420 -i 4 00:15:01.914 22:44:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:01.914 22:44:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:01.914 22:44:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.914 22:44:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:01.914 22:44:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:01.914 22:44:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:03.812 [ 0]:0x1 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bec5b3c75a384f0985b6bda19f0dfbb3 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bec5b3c75a384f0985b6bda19f0dfbb3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:03.812 [ 1]:0x2 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:03.812 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.070 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9aca4835a75844c2bc84fbe51a6d2c2b 00:15:04.070 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9aca4835a75844c2bc84fbe51a6d2c2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.070 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:04.327 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:04.327 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:04.327 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:04.327 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:04.327 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.327 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:04.327 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.327 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:04.327 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:04.328 [ 0]:0x2 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9aca4835a75844c2bc84fbe51a6d2c2b 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9aca4835a75844c2bc84fbe51a6d2c2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:04.328 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:04.585 [2024-07-26 22:44:56.902655] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:04.585 request: 00:15:04.585 { 00:15:04.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.586 "nsid": 2, 00:15:04.586 "host": "nqn.2016-06.io.spdk:host1", 00:15:04.586 "method": 
"nvmf_ns_remove_host", 00:15:04.586 "req_id": 1 00:15:04.586 } 00:15:04.586 Got JSON-RPC error response 00:15:04.586 response: 00:15:04.586 { 00:15:04.586 "code": -32602, 00:15:04.586 "message": "Invalid parameters" 00:15:04.586 } 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:04.586 [ 0]:0x2 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:04.586 22:44:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.586 22:44:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9aca4835a75844c2bc84fbe51a6d2c2b 00:15:04.586 22:44:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9aca4835a75844c2bc84fbe51a6d2c2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.586 22:44:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:04.586 22:44:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.586 22:44:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.844 22:44:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:04.844 22:44:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:04.844 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:04.844 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:04.844 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.844 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:04.844 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.844 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.844 rmmod nvme_tcp 00:15:04.844 rmmod nvme_fabrics 00:15:04.844 rmmod nvme_keyring 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3498103 ']' 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3498103 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3498103 ']' 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3498103 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3498103 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3498103' 00:15:05.102 killing process with pid 3498103 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3498103 00:15:05.102 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3498103 00:15:05.360 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.360 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.360 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.360 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.360 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.360 22:44:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.360 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.360 22:44:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.267 
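Teardown at the end of the masking test follows the order every nvmf test in this run uses: drop the initiator connection, delete the subsystem over JSON-RPC, unload the initiator-side kernel modules, kill the target, and flush the test interface. Condensed from the trace (nqn and pid as logged; the full rpc.py path is shortened here):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp        # the rmmod lines show nvme_fabrics/nvme_keyring going with it
  modprobe -v -r nvme-fabrics
  kill 3498103                   # killprocess: terminate the nvmf_tgt reactor process
  ip -4 addr flush cvl_0_1       # drop the initiator-side test address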
22:44:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.267 00:15:07.267 real 0m16.707s 00:15:07.267 user 0m52.408s 00:15:07.267 sys 0m3.719s 00:15:07.267 22:44:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:07.267 22:44:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:07.267 ************************************ 00:15:07.267 END TEST nvmf_ns_masking 00:15:07.267 ************************************ 00:15:07.267 22:44:59 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:07.267 22:44:59 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:07.267 22:44:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:07.267 22:44:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:07.267 22:44:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:07.525 ************************************ 00:15:07.525 START TEST nvmf_nvme_cli 00:15:07.525 ************************************ 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:07.525 * Looking for test storage... 00:15:07.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.525 22:44:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.526 22:44:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.426 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.426 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:09.426 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:09.426 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:09.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:09.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.427 22:45:01 
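The NIC discovery above is purely sysfs-driven: nvmf/common.sh keeps whitelists of Intel and Mellanox device IDs (the e810/x722/mlx arrays), matches them against the PCI bus, then resolves each matching function to its kernel net device by globbing that function's net/ directory. A sketch of the resolution step, using the two e810 functions found in this run:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      # each entry under .../net is the netdev bound to that PCI function
      ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 and cvl_0_1 in this run
  done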
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:09.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:09.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:09.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:15:09.427 00:15:09.427 --- 10.0.0.2 ping statistics --- 00:15:09.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.427 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:15:09.427 00:15:09.427 --- 10.0.0.1 ping statistics --- 00:15:09.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.427 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:09.427 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3501757 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3501757 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3501757 ']' 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
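The ping pair above verifies the topology the harness just built: the target-side port (cvl_0_0, 10.0.0.2) is moved into a network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so the kernel initiator and the SPDK target can talk over one dual-port card. Condensed from the ip/iptables commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> root ns

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks on its RPC socket before any rpc_cmd runs.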
00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:09.686 22:45:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.686 [2024-07-26 22:45:01.985290] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:09.686 [2024-07-26 22:45:01.985378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.686 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.686 [2024-07-26 22:45:02.052638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.686 [2024-07-26 22:45:02.144319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.686 [2024-07-26 22:45:02.144389] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.686 [2024-07-26 22:45:02.144406] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.686 [2024-07-26 22:45:02.144419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.686 [2024-07-26 22:45:02.144431] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.686 [2024-07-26 22:45:02.144486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.687 [2024-07-26 22:45:02.144542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.687 [2024-07-26 22:45:02.144661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.687 [2024-07-26 22:45:02.144664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 [2024-07-26 22:45:02.307927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 Malloc0 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 Malloc1 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.945 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.946 [2024-07-26 22:45:02.389957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.946 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:10.204 00:15:10.204 Discovery Log Number of Records 2, Generation counter 2 00:15:10.204 =====Discovery Log Entry 0====== 00:15:10.204 trtype: tcp 00:15:10.204 adrfam: ipv4 00:15:10.204 subtype: current discovery subsystem 00:15:10.204 treq: not required 00:15:10.204 portid: 0 00:15:10.204 trsvcid: 4420 00:15:10.204 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:10.204 traddr: 10.0.0.2 00:15:10.204 eflags: explicit discovery connections, duplicate discovery information 00:15:10.204 sectype: none 00:15:10.204 =====Discovery Log Entry 1====== 00:15:10.204 trtype: tcp 00:15:10.204 adrfam: ipv4 00:15:10.204 subtype: nvme subsystem 00:15:10.204 treq: not required 00:15:10.204 portid: 0 00:15:10.204 trsvcid: 
4420 00:15:10.204 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:10.204 traddr: 10.0.0.2 00:15:10.204 eflags: none 00:15:10.204 sectype: none 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:10.204 22:45:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:10.799 22:45:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:10.799 22:45:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:15:10.799 22:45:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:10.799 22:45:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:10.799 22:45:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:10.799 22:45:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.692 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:12.949 22:45:05 
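Everything the discovery log reports was assembled by the handful of rpc_cmd calls above (rpc_cmd is the harness wrapper around scripts/rpc.py). Condensed into the equivalent direct invocations, with the sizes and controller ID exactly as logged:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
  nvme connect  --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

waitforserial then polls lsblk -l -o NAME,SERIAL until both namespaces report the SPDKISFASTANDAWESOME serial, which is the grep -c result of 2 seen above.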
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:12.949 /dev/nvme0n1 ]] 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.949 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:13.206 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:13.465 22:45:05 
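get_nvme_devs, traced twice in this test (before connect, when it finds nothing, and after, when it finds two namespaces), is just a parser over nvme list output: it reads the first field of each row, skips the Node header and the dashed separator, and echoes anything matching /dev/nvme*. Reconstructed from the read/[[ pattern in the trace:

  get_nvme_devs() {
      local dev _
      while read -r dev _; do
          # the header row starts with "Node", the separator with dashes;
          # only real device paths survive the pattern match
          [[ $dev == /dev/nvme* ]] && echo "$dev"
      done < <(nvme list)
  }

After nvme disconnect, waitforserial_disconnect inverts the earlier wait, looping until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME stops matching.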
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.465 rmmod nvme_tcp 00:15:13.465 rmmod nvme_fabrics 00:15:13.465 rmmod nvme_keyring 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3501757 ']' 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3501757 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3501757 ']' 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3501757 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3501757 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3501757' 00:15:13.465 killing process with pid 3501757 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3501757 00:15:13.465 22:45:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3501757 00:15:13.723 22:45:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.723 22:45:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.723 22:45:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.723 22:45:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.723 22:45:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.723 22:45:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.723 22:45:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.723 22:45:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.255 22:45:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.255 00:15:16.255 real 0m8.418s 00:15:16.255 user 0m16.252s 00:15:16.255 sys 0m2.192s 00:15:16.255 22:45:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:16.255 22:45:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:16.255 ************************************ 00:15:16.255 END TEST nvmf_nvme_cli 00:15:16.255 ************************************ 00:15:16.255 22:45:08 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:16.255 22:45:08 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:16.255 22:45:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:16.255 22:45:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:16.255 22:45:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.255 ************************************ 00:15:16.255 START TEST nvmf_vfio_user 00:15:16.255 ************************************ 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:16.255 * Looking for test storage... 00:15:16.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:16.255 
22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3502938 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3502938' 00:15:16.255 Process pid: 3502938 00:15:16.255 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3502938 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3502938 ']' 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:16.256 [2024-07-26 22:45:08.349219] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:16.256 [2024-07-26 22:45:08.349313] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.256 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.256 [2024-07-26 22:45:08.408519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.256 [2024-07-26 22:45:08.495738] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.256 [2024-07-26 22:45:08.495785] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.256 [2024-07-26 22:45:08.495809] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.256 [2024-07-26 22:45:08.495822] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.256 [2024-07-26 22:45:08.495838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
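[Condensed for reference: the vfio-user target setup that the trace below performs amounts to the following rpc.py sequence. Commands, sizes, paths and NQNs are taken verbatim from this run; the test's loop over $NUM_DEVICES repeats the last four steps for Malloc2/cnode2. A sketch for readability, not part of the test script itself.]
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # create the VFIOUSER transport, then a socket directory per controller
    $rpc_py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    # 64 MiB malloc bdev with 512-byte blocks, exposed as namespace 1 of cnode1
    $rpc_py bdev_malloc_create 64 512 -b Malloc1
    $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    # for VFIOUSER the listener address is the socket directory, not an IP
    $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0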
00:15:16.256 [2024-07-26 22:45:08.495928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.256 [2024-07-26 22:45:08.495988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.256 [2024-07-26 22:45:08.496055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.256 [2024-07-26 22:45:08.496057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:16.256 22:45:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:17.187 22:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:17.443 22:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:17.443 22:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:17.443 22:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:17.443 22:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:17.443 22:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:17.699 Malloc1 00:15:17.699 22:45:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:17.955 22:45:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:18.212 22:45:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:18.469 22:45:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.469 22:45:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:18.469 22:45:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:18.726 Malloc2 00:15:18.726 22:45:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:18.983 22:45:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:19.241 22:45:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:19.498 22:45:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:19.498 22:45:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:19.498 22:45:11 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.498 22:45:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:19.498 22:45:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:19.498 22:45:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:19.498 [2024-07-26 22:45:11.944255] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:19.498 [2024-07-26 22:45:11.944299] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503614 ] 00:15:19.498 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.498 [2024-07-26 22:45:11.979449] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:19.498 [2024-07-26 22:45:11.987523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:19.498 [2024-07-26 22:45:11.987552] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff673062000 00:15:19.498 [2024-07-26 22:45:11.988517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:19.498 [2024-07-26 22:45:11.989509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:19.498 [2024-07-26 22:45:11.990518] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:19.498 [2024-07-26 22:45:11.991522] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:19.498 [2024-07-26 22:45:11.992525] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:19.498 [2024-07-26 22:45:11.993531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:19.498 [2024-07-26 22:45:11.994536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:19.498 [2024-07-26 22:45:11.995540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:19.498 [2024-07-26 22:45:11.996547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:19.498 [2024-07-26 22:45:11.996567] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff671e18000 00:15:19.498 [2024-07-26 22:45:11.997711] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:19.757 [2024-07-26 22:45:12.013795] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:19.757 [2024-07-26 22:45:12.013834] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:19.757 [2024-07-26 22:45:12.018688] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:19.757 [2024-07-26 22:45:12.018749] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:19.757 [2024-07-26 22:45:12.018844] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:19.757 [2024-07-26 22:45:12.018879] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:19.757 [2024-07-26 22:45:12.018890] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:19.757 [2024-07-26 22:45:12.019685] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:19.757 [2024-07-26 22:45:12.019710] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:19.757 [2024-07-26 22:45:12.019723] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:19.757 [2024-07-26 22:45:12.020692] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:19.757 [2024-07-26 22:45:12.020712] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:19.757 [2024-07-26 22:45:12.020725] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:19.757 [2024-07-26 22:45:12.021699] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:19.757 [2024-07-26 22:45:12.021717] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:19.757 [2024-07-26 22:45:12.022708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:19.757 [2024-07-26 22:45:12.022727] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:19.757 [2024-07-26 22:45:12.022737] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:19.758 [2024-07-26 22:45:12.022749] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:19.758 [2024-07-26 22:45:12.022858] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:19.758 [2024-07-26 22:45:12.022867] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:19.758 [2024-07-26 22:45:12.022876] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:19.758 [2024-07-26 22:45:12.023728] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:19.758 [2024-07-26 22:45:12.024726] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:19.758 [2024-07-26 22:45:12.025731] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:19.758 [2024-07-26 22:45:12.026727] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.758 [2024-07-26 22:45:12.026846] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:19.758 [2024-07-26 22:45:12.027740] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:19.758 [2024-07-26 22:45:12.027758] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:19.758 [2024-07-26 22:45:12.027766] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.027790] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:19.758 [2024-07-26 22:45:12.027804] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.027837] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:19.758 [2024-07-26 22:45:12.027847] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:19.758 [2024-07-26 22:45:12.027870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.027944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.027970] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:19.758 [2024-07-26 22:45:12.027979] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:19.758 [2024-07-26 22:45:12.027987] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:19.758 [2024-07-26 22:45:12.027995] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:19.758 [2024-07-26 22:45:12.028003] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:19.758 [2024-07-26 22:45:12.028011] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:19.758 [2024-07-26 22:45:12.028019] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028032] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.028107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.758 [2024-07-26 22:45:12.028120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.758 [2024-07-26 22:45:12.028132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.758 [2024-07-26 22:45:12.028144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.758 [2024-07-26 22:45:12.028153] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028169] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.028207] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:19.758 [2024-07-26 22:45:12.028216] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028227] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028242] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.028338] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028369] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028384] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:19.758 [2024-07-26 22:45:12.028392] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:19.758 [2024-07-26 22:45:12.028402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.028433] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:19.758 [2024-07-26 22:45:12.028454] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028469] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028480] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:19.758 [2024-07-26 22:45:12.028488] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:19.758 [2024-07-26 22:45:12.028498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.028541] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028556] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028568] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:19.758 [2024-07-26 22:45:12.028576] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:19.758 [2024-07-26 22:45:12.028585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.028611] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028622] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
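[The admin-queue traffic traced above and below is driven by the identify example; its invocation, as recorded earlier in this run, is repeated here for reference. The -L flags enable the nvme/nvme_vfio/vfio_pci debug traces seen in this log.]
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci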
00:15:19.758 [2024-07-26 22:45:12.028636] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028648] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028656] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028665] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:19.758 [2024-07-26 22:45:12.028673] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:19.758 [2024-07-26 22:45:12.028684] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:19.758 [2024-07-26 22:45:12.028718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.028755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.028782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:19.758 [2024-07-26 22:45:12.028813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:19.758 [2024-07-26 22:45:12.028824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:19.759 [2024-07-26 22:45:12.028842] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:19.759 [2024-07-26 22:45:12.028851] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:19.759 [2024-07-26 22:45:12.028857] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:19.759 [2024-07-26 22:45:12.028863] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:19.759 [2024-07-26 22:45:12.028873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:19.759 [2024-07-26 22:45:12.028883] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:19.759 [2024-07-26 22:45:12.028891] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:19.759 [2024-07-26 22:45:12.028900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:19.759 [2024-07-26 22:45:12.028910] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:19.759 [2024-07-26 22:45:12.028918] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:19.759 [2024-07-26 22:45:12.028926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:19.759 [2024-07-26 22:45:12.028938] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:19.759 [2024-07-26 22:45:12.028946] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:19.759 [2024-07-26 22:45:12.028955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:19.759 [2024-07-26 22:45:12.028966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:19.759 [2024-07-26 22:45:12.028985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:19.759 [2024-07-26 22:45:12.029000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:19.759 [2024-07-26 22:45:12.029014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:19.759 ===================================================== 00:15:19.759 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:19.759 ===================================================== 00:15:19.759 Controller Capabilities/Features 00:15:19.759 ================================ 00:15:19.759 Vendor ID: 4e58 00:15:19.759 Subsystem Vendor ID: 4e58 00:15:19.759 Serial Number: SPDK1 00:15:19.759 Model Number: SPDK bdev Controller 00:15:19.759 Firmware Version: 24.05.1 00:15:19.759 Recommended Arb Burst: 6 00:15:19.759 IEEE OUI Identifier: 8d 6b 50 00:15:19.759 Multi-path I/O 00:15:19.759 May have multiple subsystem ports: Yes 00:15:19.759 May have multiple controllers: Yes 00:15:19.759 Associated with SR-IOV VF: No 00:15:19.759 Max Data Transfer Size: 131072 00:15:19.759 Max Number of Namespaces: 32 00:15:19.759 Max Number of I/O Queues: 127 00:15:19.759 NVMe Specification Version (VS): 1.3 00:15:19.759 NVMe Specification Version (Identify): 1.3 00:15:19.759 Maximum Queue Entries: 256 00:15:19.759 Contiguous Queues Required: Yes 00:15:19.759 Arbitration Mechanisms Supported 00:15:19.759 Weighted Round Robin: Not Supported 00:15:19.759 Vendor Specific: Not Supported 00:15:19.759 Reset Timeout: 15000 ms 00:15:19.759 Doorbell Stride: 4 bytes 00:15:19.759 NVM Subsystem Reset: Not Supported 00:15:19.759 Command Sets Supported 00:15:19.759 NVM Command Set: Supported 00:15:19.759 Boot Partition: Not Supported 00:15:19.759 Memory Page Size Minimum: 4096 bytes 00:15:19.759 Memory Page Size Maximum: 4096 bytes 00:15:19.759 Persistent Memory Region: Not Supported 00:15:19.759 Optional Asynchronous Events Supported 00:15:19.759 Namespace Attribute Notices: Supported 00:15:19.759 Firmware Activation Notices: Not Supported 00:15:19.759 ANA Change Notices: Not Supported 
00:15:19.759 PLE Aggregate Log Change Notices: Not Supported 00:15:19.759 LBA Status Info Alert Notices: Not Supported 00:15:19.759 EGE Aggregate Log Change Notices: Not Supported 00:15:19.759 Normal NVM Subsystem Shutdown event: Not Supported 00:15:19.759 Zone Descriptor Change Notices: Not Supported 00:15:19.759 Discovery Log Change Notices: Not Supported 00:15:19.759 Controller Attributes 00:15:19.759 128-bit Host Identifier: Supported 00:15:19.759 Non-Operational Permissive Mode: Not Supported 00:15:19.759 NVM Sets: Not Supported 00:15:19.759 Read Recovery Levels: Not Supported 00:15:19.759 Endurance Groups: Not Supported 00:15:19.759 Predictable Latency Mode: Not Supported 00:15:19.759 Traffic Based Keep ALive: Not Supported 00:15:19.759 Namespace Granularity: Not Supported 00:15:19.759 SQ Associations: Not Supported 00:15:19.759 UUID List: Not Supported 00:15:19.759 Multi-Domain Subsystem: Not Supported 00:15:19.759 Fixed Capacity Management: Not Supported 00:15:19.759 Variable Capacity Management: Not Supported 00:15:19.759 Delete Endurance Group: Not Supported 00:15:19.759 Delete NVM Set: Not Supported 00:15:19.759 Extended LBA Formats Supported: Not Supported 00:15:19.759 Flexible Data Placement Supported: Not Supported 00:15:19.759 00:15:19.759 Controller Memory Buffer Support 00:15:19.759 ================================ 00:15:19.759 Supported: No 00:15:19.759 00:15:19.759 Persistent Memory Region Support 00:15:19.759 ================================ 00:15:19.759 Supported: No 00:15:19.759 00:15:19.759 Admin Command Set Attributes 00:15:19.759 ============================ 00:15:19.759 Security Send/Receive: Not Supported 00:15:19.759 Format NVM: Not Supported 00:15:19.759 Firmware Activate/Download: Not Supported 00:15:19.759 Namespace Management: Not Supported 00:15:19.759 Device Self-Test: Not Supported 00:15:19.759 Directives: Not Supported 00:15:19.759 NVMe-MI: Not Supported 00:15:19.759 Virtualization Management: Not Supported 00:15:19.759 Doorbell Buffer Config: Not Supported 00:15:19.759 Get LBA Status Capability: Not Supported 00:15:19.759 Command & Feature Lockdown Capability: Not Supported 00:15:19.759 Abort Command Limit: 4 00:15:19.759 Async Event Request Limit: 4 00:15:19.759 Number of Firmware Slots: N/A 00:15:19.759 Firmware Slot 1 Read-Only: N/A 00:15:19.759 Firmware Activation Without Reset: N/A 00:15:19.759 Multiple Update Detection Support: N/A 00:15:19.759 Firmware Update Granularity: No Information Provided 00:15:19.759 Per-Namespace SMART Log: No 00:15:19.759 Asymmetric Namespace Access Log Page: Not Supported 00:15:19.759 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:19.759 Command Effects Log Page: Supported 00:15:19.759 Get Log Page Extended Data: Supported 00:15:19.759 Telemetry Log Pages: Not Supported 00:15:19.759 Persistent Event Log Pages: Not Supported 00:15:19.759 Supported Log Pages Log Page: May Support 00:15:19.759 Commands Supported & Effects Log Page: Not Supported 00:15:19.759 Feature Identifiers & Effects Log Page:May Support 00:15:19.759 NVMe-MI Commands & Effects Log Page: May Support 00:15:19.759 Data Area 4 for Telemetry Log: Not Supported 00:15:19.759 Error Log Page Entries Supported: 128 00:15:19.759 Keep Alive: Supported 00:15:19.759 Keep Alive Granularity: 10000 ms 00:15:19.759 00:15:19.759 NVM Command Set Attributes 00:15:19.759 ========================== 00:15:19.759 Submission Queue Entry Size 00:15:19.759 Max: 64 00:15:19.759 Min: 64 00:15:19.759 Completion Queue Entry Size 00:15:19.759 Max: 16 00:15:19.759 Min: 16 
00:15:19.759 Number of Namespaces: 32 00:15:19.759 Compare Command: Supported 00:15:19.759 Write Uncorrectable Command: Not Supported 00:15:19.759 Dataset Management Command: Supported 00:15:19.759 Write Zeroes Command: Supported 00:15:19.759 Set Features Save Field: Not Supported 00:15:19.759 Reservations: Not Supported 00:15:19.759 Timestamp: Not Supported 00:15:19.759 Copy: Supported 00:15:19.759 Volatile Write Cache: Present 00:15:19.759 Atomic Write Unit (Normal): 1 00:15:19.759 Atomic Write Unit (PFail): 1 00:15:19.759 Atomic Compare & Write Unit: 1 00:15:19.759 Fused Compare & Write: Supported 00:15:19.759 Scatter-Gather List 00:15:19.759 SGL Command Set: Supported (Dword aligned) 00:15:19.759 SGL Keyed: Not Supported 00:15:19.759 SGL Bit Bucket Descriptor: Not Supported 00:15:19.759 SGL Metadata Pointer: Not Supported 00:15:19.759 Oversized SGL: Not Supported 00:15:19.759 SGL Metadata Address: Not Supported 00:15:19.759 SGL Offset: Not Supported 00:15:19.759 Transport SGL Data Block: Not Supported 00:15:19.759 Replay Protected Memory Block: Not Supported 00:15:19.759 00:15:19.759 Firmware Slot Information 00:15:19.759 ========================= 00:15:19.759 Active slot: 1 00:15:19.759 Slot 1 Firmware Revision: 24.05.1 00:15:19.759 00:15:19.759 00:15:19.759 Commands Supported and Effects 00:15:19.759 ============================== 00:15:19.759 Admin Commands 00:15:19.759 -------------- 00:15:19.760 Get Log Page (02h): Supported 00:15:19.760 Identify (06h): Supported 00:15:19.760 Abort (08h): Supported 00:15:19.760 Set Features (09h): Supported 00:15:19.760 Get Features (0Ah): Supported 00:15:19.760 Asynchronous Event Request (0Ch): Supported 00:15:19.760 Keep Alive (18h): Supported 00:15:19.760 I/O Commands 00:15:19.760 ------------ 00:15:19.760 Flush (00h): Supported LBA-Change 00:15:19.760 Write (01h): Supported LBA-Change 00:15:19.760 Read (02h): Supported 00:15:19.760 Compare (05h): Supported 00:15:19.760 Write Zeroes (08h): Supported LBA-Change 00:15:19.760 Dataset Management (09h): Supported LBA-Change 00:15:19.760 Copy (19h): Supported LBA-Change 00:15:19.760 Unknown (79h): Supported LBA-Change 00:15:19.760 Unknown (7Ah): Supported 00:15:19.760 00:15:19.760 Error Log 00:15:19.760 ========= 00:15:19.760 00:15:19.760 Arbitration 00:15:19.760 =========== 00:15:19.760 Arbitration Burst: 1 00:15:19.760 00:15:19.760 Power Management 00:15:19.760 ================ 00:15:19.760 Number of Power States: 1 00:15:19.760 Current Power State: Power State #0 00:15:19.760 Power State #0: 00:15:19.760 Max Power: 0.00 W 00:15:19.760 Non-Operational State: Operational 00:15:19.760 Entry Latency: Not Reported 00:15:19.760 Exit Latency: Not Reported 00:15:19.760 Relative Read Throughput: 0 00:15:19.760 Relative Read Latency: 0 00:15:19.760 Relative Write Throughput: 0 00:15:19.760 Relative Write Latency: 0 00:15:19.760 Idle Power: Not Reported 00:15:19.760 Active Power: Not Reported 00:15:19.760 Non-Operational Permissive Mode: Not Supported 00:15:19.760 00:15:19.760 Health Information 00:15:19.760 ================== 00:15:19.760 Critical Warnings: 00:15:19.760 Available Spare Space: OK 00:15:19.760 Temperature: OK 00:15:19.760 Device Reliability: OK 00:15:19.760 Read Only: No 00:15:19.760 Volatile Memory Backup: OK 00:15:19.760
[2024-07-26 22:45:12.029197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:19.760 [2024-07-26 22:45:12.029215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:19.760 [2024-07-26 22:45:12.029256] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:19.760 [2024-07-26 22:45:12.029273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.760 [2024-07-26 22:45:12.029284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.760 [2024-07-26 22:45:12.029294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.760 [2024-07-26 22:45:12.029304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.760 [2024-07-26 22:45:12.033070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:19.760 [2024-07-26 22:45:12.033093] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:19.760 [2024-07-26 22:45:12.033766] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.760 [2024-07-26 22:45:12.033837] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:19.760 [2024-07-26 22:45:12.033851] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:19.760 [2024-07-26 22:45:12.034778] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:19.760 [2024-07-26 22:45:12.034801] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:19.760 [2024-07-26 22:45:12.034856] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:19.760 [2024-07-26 22:45:12.036816] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:19.760
Current Temperature: 0 Kelvin (-273 Celsius) 00:15:19.760 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:19.760 Available Spare: 0% 00:15:19.760 Available Spare Threshold: 0% 00:15:19.760 Life Percentage Used: 0% 00:15:19.760 Data Units Read: 0 00:15:19.760 Data Units Written: 0 00:15:19.760 Host Read Commands: 0 00:15:19.760 Host Write Commands: 0 00:15:19.760 Controller Busy Time: 0 minutes 00:15:19.760 Power Cycles: 0 00:15:19.760 Power On Hours: 0 hours 00:15:19.760 Unsafe Shutdowns: 0 00:15:19.760 Unrecoverable Media Errors: 0 00:15:19.760 Lifetime Error Log Entries: 0 00:15:19.760 Warning Temperature Time: 0 minutes 00:15:19.760 Critical Temperature Time: 0 minutes 00:15:19.760 00:15:19.760 Number of Queues 00:15:19.760 ================ 00:15:19.760 Number of I/O Submission Queues: 127 00:15:19.760 Number of I/O Completion Queues: 127 00:15:19.760 00:15:19.760 Active Namespaces 00:15:19.760 ================= 00:15:19.760 Namespace ID:1 00:15:19.760 Error Recovery Timeout: Unlimited 00:15:19.760 Command Set Identifier: NVM (00h) 00:15:19.760 Deallocate: Supported 00:15:19.760 Deallocated/Unwritten Error: Not Supported
00:15:19.760 Deallocated Read Value: Unknown 00:15:19.760 Deallocate in Write Zeroes: Not Supported 00:15:19.760 Deallocated Guard Field: 0xFFFF 00:15:19.760 Flush: Supported 00:15:19.760 Reservation: Supported 00:15:19.760 Namespace Sharing Capabilities: Multiple Controllers 00:15:19.760 Size (in LBAs): 131072 (0GiB) 00:15:19.760 Capacity (in LBAs): 131072 (0GiB) 00:15:19.760 Utilization (in LBAs): 131072 (0GiB) 00:15:19.760 NGUID: 43CE2521C5804A8DA3E614D9BD110D0D 00:15:19.760 UUID: 43ce2521-c580-4a8d-a3e6-14d9bd110d0d 00:15:19.760 Thin Provisioning: Not Supported 00:15:19.760 Per-NS Atomic Units: Yes 00:15:19.760 Atomic Boundary Size (Normal): 0 00:15:19.760 Atomic Boundary Size (PFail): 0 00:15:19.760 Atomic Boundary Offset: 0 00:15:19.760 Maximum Single Source Range Length: 65535 00:15:19.760 Maximum Copy Length: 65535 00:15:19.760 Maximum Source Range Count: 1 00:15:19.760 NGUID/EUI64 Never Reused: No 00:15:19.760 Namespace Write Protected: No 00:15:19.760 Number of LBA Formats: 1 00:15:19.760 Current LBA Format: LBA Format #00 00:15:19.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:19.760 00:15:19.760 22:45:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:19.760 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.018 [2024-07-26 22:45:12.260480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:25.278 Initializing NVMe Controllers 00:15:25.278 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:25.278 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:25.278 Initialization complete. Launching workers. 00:15:25.278 ======================================================== 00:15:25.278 Latency(us) 00:15:25.278 Device Information : IOPS MiB/s Average min max 00:15:25.278 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35567.00 138.93 3599.77 1139.79 8506.76 00:15:25.278 ======================================================== 00:15:25.278 Total : 35567.00 138.93 3599.77 1139.79 8506.76 00:15:25.278 00:15:25.278 [2024-07-26 22:45:17.282375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:25.278 22:45:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:25.278 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.278 [2024-07-26 22:45:17.512449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:30.539 Initializing NVMe Controllers 00:15:30.539 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:30.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:30.539 Initialization complete. Launching workers. 
00:15:30.539 ======================================================== 00:15:30.539 Latency(us) 00:15:30.539 Device Information : IOPS MiB/s Average min max 00:15:30.539 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.21 4988.75 12030.41 00:15:30.539 ======================================================== 00:15:30.539 Total : 16051.20 62.70 7984.21 4988.75 12030.41 00:15:30.539 00:15:30.539 [2024-07-26 22:45:22.548471] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:30.539 22:45:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:30.539 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.539 [2024-07-26 22:45:22.765568] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.827 [2024-07-26 22:45:27.834374] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.827 Initializing NVMe Controllers 00:15:35.827 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:35.827 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:35.827 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:35.827 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:35.827 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:35.827 Initialization complete. Launching workers. 00:15:35.827 Starting thread on core 2 00:15:35.827 Starting thread on core 3 00:15:35.827 Starting thread on core 1 00:15:35.827 22:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:35.827 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.827 [2024-07-26 22:45:28.131589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:39.107 [2024-07-26 22:45:31.194561] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:39.107 Initializing NVMe Controllers 00:15:39.107 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:39.107 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:39.107 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:39.107 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:39.107 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:39.107 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:39.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:39.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:39.107 Initialization complete. Launching workers. 
00:15:39.107 Starting thread on core 1 with urgent priority queue 00:15:39.107 Starting thread on core 2 with urgent priority queue 00:15:39.107 Starting thread on core 3 with urgent priority queue 00:15:39.107 Starting thread on core 0 with urgent priority queue 00:15:39.107 SPDK bdev Controller (SPDK1 ) core 0: 2328.67 IO/s 42.94 secs/100000 ios 00:15:39.107 SPDK bdev Controller (SPDK1 ) core 1: 2445.67 IO/s 40.89 secs/100000 ios 00:15:39.107 SPDK bdev Controller (SPDK1 ) core 2: 2342.00 IO/s 42.70 secs/100000 ios 00:15:39.107 SPDK bdev Controller (SPDK1 ) core 3: 2504.33 IO/s 39.93 secs/100000 ios 00:15:39.107 ======================================================== 00:15:39.107 00:15:39.107 22:45:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:39.107 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.107 [2024-07-26 22:45:31.493646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:39.107 Initializing NVMe Controllers 00:15:39.107 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:39.107 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:39.107 Namespace ID: 1 size: 0GB 00:15:39.107 Initialization complete. 00:15:39.107 INFO: using host memory buffer for IO 00:15:39.107 Hello world! 00:15:39.107 [2024-07-26 22:45:31.531261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:39.107 22:45:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:39.364 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.364 [2024-07-26 22:45:31.818591] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:40.736 Initializing NVMe Controllers 00:15:40.736 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.736 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.736 Initialization complete. Launching workers. 
00:15:40.736 submit (in ns) avg, min, max = 7523.1, 3505.6, 4015116.7 00:15:40.736 complete (in ns) avg, min, max = 24151.4, 2065.6, 4014622.2 00:15:40.736 00:15:40.736 Submit histogram 00:15:40.736 ================ 00:15:40.736 Range in us Cumulative Count 00:15:40.736 3.484 - 3.508: 0.0224% ( 3) 00:15:40.736 3.508 - 3.532: 0.6632% ( 86) 00:15:40.736 3.532 - 3.556: 1.6171% ( 128) 00:15:40.736 3.556 - 3.579: 5.4550% ( 515) 00:15:40.736 3.579 - 3.603: 11.2303% ( 775) 00:15:40.736 3.603 - 3.627: 19.8897% ( 1162) 00:15:40.736 3.627 - 3.650: 28.4224% ( 1145) 00:15:40.736 3.650 - 3.674: 37.3947% ( 1204) 00:15:40.736 3.674 - 3.698: 45.0481% ( 1027) 00:15:40.736 3.698 - 3.721: 51.4718% ( 862) 00:15:40.736 3.721 - 3.745: 56.1368% ( 626) 00:15:40.736 3.745 - 3.769: 59.4903% ( 450) 00:15:40.736 3.769 - 3.793: 62.7170% ( 433) 00:15:40.736 3.793 - 3.816: 65.8618% ( 422) 00:15:40.736 3.816 - 3.840: 69.7593% ( 523) 00:15:40.736 3.840 - 3.864: 73.9921% ( 568) 00:15:40.736 3.864 - 3.887: 77.9194% ( 527) 00:15:40.736 3.887 - 3.911: 81.8243% ( 524) 00:15:40.736 3.911 - 3.935: 85.3268% ( 470) 00:15:40.736 3.935 - 3.959: 87.3910% ( 277) 00:15:40.736 3.959 - 3.982: 88.9709% ( 212) 00:15:40.736 3.982 - 4.006: 90.2750% ( 175) 00:15:40.736 4.006 - 4.030: 91.2661% ( 133) 00:15:40.736 4.030 - 4.053: 92.3690% ( 148) 00:15:40.736 4.053 - 4.077: 93.4719% ( 148) 00:15:40.736 4.077 - 4.101: 94.2768% ( 108) 00:15:40.736 4.101 - 4.124: 95.0891% ( 109) 00:15:40.736 4.124 - 4.148: 95.6182% ( 71) 00:15:40.736 4.148 - 4.172: 96.1249% ( 68) 00:15:40.736 4.172 - 4.196: 96.3932% ( 36) 00:15:40.736 4.196 - 4.219: 96.5944% ( 27) 00:15:40.736 4.219 - 4.243: 96.8105% ( 29) 00:15:40.736 4.243 - 4.267: 96.9744% ( 22) 00:15:40.736 4.267 - 4.290: 97.1011% ( 17) 00:15:40.736 4.290 - 4.314: 97.1980% ( 13) 00:15:40.736 4.314 - 4.338: 97.2725% ( 10) 00:15:40.736 4.338 - 4.361: 97.3694% ( 13) 00:15:40.736 4.361 - 4.385: 97.4365% ( 9) 00:15:40.736 4.385 - 4.409: 97.4961% ( 8) 00:15:40.736 4.409 - 4.433: 97.5706% ( 10) 00:15:40.736 4.433 - 4.456: 97.5855% ( 2) 00:15:40.736 4.456 - 4.480: 97.6153% ( 4) 00:15:40.736 4.480 - 4.504: 97.6228% ( 1) 00:15:40.736 4.504 - 4.527: 97.6302% ( 1) 00:15:40.736 4.527 - 4.551: 97.6377% ( 1) 00:15:40.736 4.551 - 4.575: 97.6451% ( 1) 00:15:40.736 4.575 - 4.599: 97.6675% ( 3) 00:15:40.736 4.646 - 4.670: 97.6749% ( 1) 00:15:40.736 4.670 - 4.693: 97.6898% ( 2) 00:15:40.736 4.693 - 4.717: 97.7197% ( 4) 00:15:40.736 4.717 - 4.741: 97.7569% ( 5) 00:15:40.736 4.741 - 4.764: 97.7718% ( 2) 00:15:40.736 4.764 - 4.788: 97.8016% ( 4) 00:15:40.736 4.788 - 4.812: 97.8612% ( 8) 00:15:40.736 4.812 - 4.836: 97.8985% ( 5) 00:15:40.736 4.836 - 4.859: 97.9507% ( 7) 00:15:40.736 4.859 - 4.883: 98.0028% ( 7) 00:15:40.736 4.883 - 4.907: 98.0550% ( 7) 00:15:40.736 4.907 - 4.930: 98.0624% ( 1) 00:15:40.736 4.930 - 4.954: 98.0774% ( 2) 00:15:40.736 4.954 - 4.978: 98.1072% ( 4) 00:15:40.736 4.978 - 5.001: 98.1370% ( 4) 00:15:40.736 5.001 - 5.025: 98.2040% ( 9) 00:15:40.736 5.025 - 5.049: 98.2264% ( 3) 00:15:40.736 5.049 - 5.073: 98.2637% ( 5) 00:15:40.736 5.073 - 5.096: 98.3084% ( 6) 00:15:40.736 5.096 - 5.120: 98.3456% ( 5) 00:15:40.736 5.144 - 5.167: 98.3531% ( 1) 00:15:40.736 5.167 - 5.191: 98.3680% ( 2) 00:15:40.736 5.262 - 5.286: 98.3754% ( 1) 00:15:40.736 5.310 - 5.333: 98.3903% ( 2) 00:15:40.736 5.333 - 5.357: 98.3978% ( 1) 00:15:40.736 5.428 - 5.452: 98.4052% ( 1) 00:15:40.736 5.523 - 5.547: 98.4127% ( 1) 00:15:40.736 5.760 - 5.784: 98.4202% ( 1) 00:15:40.736 5.997 - 6.021: 98.4276% ( 1) 00:15:40.736 6.305 - 6.353: 98.4351% ( 1) 
00:15:40.736 6.590 - 6.637: 98.4425% ( 1) 00:15:40.736 6.637 - 6.684: 98.4500% ( 1) 00:15:40.736 6.779 - 6.827: 98.4574% ( 1) 00:15:40.736 6.827 - 6.874: 98.4649% ( 1) 00:15:40.736 6.874 - 6.921: 98.4723% ( 1) 00:15:40.736 6.921 - 6.969: 98.4798% ( 1) 00:15:40.736 6.969 - 7.016: 98.4872% ( 1) 00:15:40.736 7.064 - 7.111: 98.4947% ( 1) 00:15:40.736 7.159 - 7.206: 98.5096% ( 2) 00:15:40.736 7.206 - 7.253: 98.5170% ( 1) 00:15:40.736 7.396 - 7.443: 98.5319% ( 2) 00:15:40.736 7.443 - 7.490: 98.5468% ( 2) 00:15:40.736 7.490 - 7.538: 98.5543% ( 1) 00:15:40.736 7.538 - 7.585: 98.5692% ( 2) 00:15:40.736 7.585 - 7.633: 98.5766% ( 1) 00:15:40.736 7.727 - 7.775: 98.5841% ( 1) 00:15:40.736 7.775 - 7.822: 98.5915% ( 1) 00:15:40.736 7.870 - 7.917: 98.5990% ( 1) 00:15:40.736 7.917 - 7.964: 98.6065% ( 1) 00:15:40.736 8.012 - 8.059: 98.6214% ( 2) 00:15:40.736 8.059 - 8.107: 98.6363% ( 2) 00:15:40.736 8.154 - 8.201: 98.6512% ( 2) 00:15:40.736 8.249 - 8.296: 98.6586% ( 1) 00:15:40.736 8.296 - 8.344: 98.6735% ( 2) 00:15:40.736 8.628 - 8.676: 98.6810% ( 1) 00:15:40.736 8.676 - 8.723: 98.6884% ( 1) 00:15:40.736 8.723 - 8.770: 98.6959% ( 1) 00:15:40.736 8.913 - 8.960: 98.7033% ( 1) 00:15:40.736 9.007 - 9.055: 98.7108% ( 1) 00:15:40.736 9.529 - 9.576: 98.7182% ( 1) 00:15:40.736 9.671 - 9.719: 98.7257% ( 1) 00:15:40.736 9.719 - 9.766: 98.7331% ( 1) 00:15:40.736 9.766 - 9.813: 98.7406% ( 1) 00:15:40.736 9.956 - 10.003: 98.7480% ( 1) 00:15:40.736 10.098 - 10.145: 98.7555% ( 1) 00:15:40.736 10.145 - 10.193: 98.7629% ( 1) 00:15:40.736 10.951 - 10.999: 98.7704% ( 1) 00:15:40.736 10.999 - 11.046: 98.7779% ( 1) 00:15:40.736 11.378 - 11.425: 98.7853% ( 1) 00:15:40.736 11.425 - 11.473: 98.7928% ( 1) 00:15:40.737 11.520 - 11.567: 98.8002% ( 1) 00:15:40.737 11.757 - 11.804: 98.8151% ( 2) 00:15:40.737 12.041 - 12.089: 98.8226% ( 1) 00:15:40.737 12.136 - 12.231: 98.8300% ( 1) 00:15:40.737 12.231 - 12.326: 98.8375% ( 1) 00:15:40.737 12.610 - 12.705: 98.8449% ( 1) 00:15:40.737 12.705 - 12.800: 98.8524% ( 1) 00:15:40.737 13.084 - 13.179: 98.8598% ( 1) 00:15:40.737 13.179 - 13.274: 98.8673% ( 1) 00:15:40.737 13.274 - 13.369: 98.8747% ( 1) 00:15:40.737 13.559 - 13.653: 98.8822% ( 1) 00:15:40.737 13.653 - 13.748: 98.8896% ( 1) 00:15:40.737 13.748 - 13.843: 98.9045% ( 2) 00:15:40.737 13.938 - 14.033: 98.9120% ( 1) 00:15:40.737 14.222 - 14.317: 98.9194% ( 1) 00:15:40.737 14.317 - 14.412: 98.9269% ( 1) 00:15:40.737 14.791 - 14.886: 98.9343% ( 1) 00:15:40.737 16.972 - 17.067: 98.9418% ( 1) 00:15:40.737 17.067 - 17.161: 98.9493% ( 1) 00:15:40.737 17.161 - 17.256: 98.9642% ( 2) 00:15:40.737 17.256 - 17.351: 98.9865% ( 3) 00:15:40.737 17.351 - 17.446: 98.9940% ( 1) 00:15:40.737 17.446 - 17.541: 99.0163% ( 3) 00:15:40.737 17.541 - 17.636: 99.0238% ( 1) 00:15:40.737 17.636 - 17.730: 99.0610% ( 5) 00:15:40.737 17.730 - 17.825: 99.0834% ( 3) 00:15:40.737 17.825 - 17.920: 99.1132% ( 4) 00:15:40.737 17.920 - 18.015: 99.1579% ( 6) 00:15:40.737 18.015 - 18.110: 99.2324% ( 10) 00:15:40.737 18.110 - 18.204: 99.2995% ( 9) 00:15:40.737 18.204 - 18.299: 99.3740% ( 10) 00:15:40.737 18.299 - 18.394: 99.4411% ( 9) 00:15:40.737 18.394 - 18.489: 99.4858% ( 6) 00:15:40.737 18.489 - 18.584: 99.5603% ( 10) 00:15:40.737 18.584 - 18.679: 99.6125% ( 7) 00:15:40.737 18.679 - 18.773: 99.6647% ( 7) 00:15:40.737 18.773 - 18.868: 99.6796% ( 2) 00:15:40.737 18.868 - 18.963: 99.7168% ( 5) 00:15:40.737 19.153 - 19.247: 99.7392% ( 3) 00:15:40.737 19.247 - 19.342: 99.7466% ( 1) 00:15:40.737 19.342 - 19.437: 99.7541% ( 1) 00:15:40.737 19.437 - 19.532: 99.7690% ( 2) 
00:15:40.737 19.627 - 19.721: 99.7839% ( 2) 00:15:40.737 19.721 - 19.816: 99.7913% ( 1) 00:15:40.737 19.816 - 19.911: 99.8286% ( 5) 00:15:40.737 20.101 - 20.196: 99.8361% ( 1) 00:15:40.737 21.428 - 21.523: 99.8435% ( 1) 00:15:40.737 23.040 - 23.135: 99.8510% ( 1) 00:15:40.737 24.273 - 24.462: 99.8584% ( 1) 00:15:40.737 24.841 - 25.031: 99.8659% ( 1) 00:15:40.737 25.600 - 25.790: 99.8733% ( 1) 00:15:40.737 25.790 - 25.979: 99.8882% ( 2) 00:15:40.737 26.169 - 26.359: 99.8957% ( 1) 00:15:40.737 31.858 - 32.047: 99.9031% ( 1) 00:15:40.737 35.461 - 35.650: 99.9106% ( 1) 00:15:40.737 3980.705 - 4004.978: 99.9776% ( 9) 00:15:40.737 4004.978 - 4029.250: 100.0000% ( 3) 00:15:40.737 00:15:40.737 Complete histogram 00:15:40.737 ================== 00:15:40.737 Range in us Cumulative Count 00:15:40.737 2.062 - 2.074: 5.7679% ( 774) 00:15:40.737 2.074 - 2.086: 34.4139% ( 3844) 00:15:40.737 2.086 - 2.098: 38.0282% ( 485) 00:15:40.737 2.098 - 2.110: 46.6130% ( 1152) 00:15:40.737 2.110 - 2.121: 59.3487% ( 1709) 00:15:40.737 2.121 - 2.133: 61.0105% ( 223) 00:15:40.737 2.133 - 2.145: 67.5982% ( 884) 00:15:40.737 2.145 - 2.157: 73.2692% ( 761) 00:15:40.737 2.157 - 2.169: 74.2529% ( 132) 00:15:40.737 2.169 - 2.181: 77.7107% ( 464) 00:15:40.737 2.181 - 2.193: 80.5202% ( 377) 00:15:40.737 2.193 - 2.204: 81.1387% ( 83) 00:15:40.737 2.204 - 2.216: 83.9407% ( 376) 00:15:40.737 2.216 - 2.228: 87.7785% ( 515) 00:15:40.737 2.228 - 2.240: 89.7235% ( 261) 00:15:40.737 2.240 - 2.252: 91.6164% ( 254) 00:15:40.737 2.252 - 2.264: 93.3825% ( 237) 00:15:40.737 2.264 - 2.276: 93.6508% ( 36) 00:15:40.737 2.276 - 2.287: 94.1128% ( 62) 00:15:40.737 2.287 - 2.299: 94.5450% ( 58) 00:15:40.737 2.299 - 2.311: 95.0816% ( 72) 00:15:40.737 2.311 - 2.323: 95.3350% ( 34) 00:15:40.737 2.323 - 2.335: 95.3871% ( 7) 00:15:40.737 2.335 - 2.347: 95.4542% ( 9) 00:15:40.737 2.347 - 2.359: 95.5809% ( 17) 00:15:40.737 2.359 - 2.370: 95.9237% ( 46) 00:15:40.737 2.370 - 2.382: 96.4751% ( 74) 00:15:40.737 2.382 - 2.394: 97.0639% ( 79) 00:15:40.737 2.394 - 2.406: 97.3694% ( 41) 00:15:40.737 2.406 - 2.418: 97.5557% ( 25) 00:15:40.737 2.418 - 2.430: 97.6600% ( 14) 00:15:40.737 2.430 - 2.441: 97.8091% ( 20) 00:15:40.737 2.441 - 2.453: 97.9581% ( 20) 00:15:40.737 2.453 - 2.465: 98.0401% ( 11) 00:15:40.737 2.465 - 2.477: 98.1072% ( 9) 00:15:40.737 2.477 - 2.489: 98.1519% ( 6) 00:15:40.737 2.489 - 2.501: 98.1817% ( 4) 00:15:40.737 2.501 - 2.513: 98.2264% ( 6) 00:15:40.737 2.513 - 2.524: 98.2488% ( 3) 00:15:40.737 2.524 - 2.536: 98.2786% ( 4) 00:15:40.737 2.536 - 2.548: 98.2860% ( 1) 00:15:40.737 2.560 - 2.572: 98.3084% ( 3) 00:15:40.737 2.572 - 2.584: 98.3456% ( 5) 00:15:40.737 2.584 - 2.596: 98.3531% ( 1) 00:15:40.737 2.596 - 2.607: 98.3605% ( 1) 00:15:40.737 2.619 - 2.631: 98.3680% ( 1) 00:15:40.737 2.631 - 2.643: 98.3754% ( 1) 00:15:40.737 2.655 - 2.667: 98.3829% ( 1) 00:15:40.737 2.679 - 2.690: 98.4127% ( 4) 00:15:40.737 2.690 - 2.702: 98.4276% ( 2) 00:15:40.737 2.714 - 2.726: 98.4351% ( 1) 00:15:40.737 2.726 - 2.738: 98.4425% ( 1) 00:15:40.737 2.738 - 2.750: 98.4500% ( 1) 00:15:40.737 2.916 - 2.927: 98.4574% ( 1) 00:15:40.737 3.176 - 3.200: 98.4649% ( 1) 00:15:40.737 3.224 - 3.247: 98.4723% ( 1) 00:15:40.737 3.271 - 3.295: 98.4798% ( 1) 00:15:40.737 3.295 - 3.319: 98.4872% ( 1) 00:15:40.737 3.319 - 3.342: 98.5021% ( 2) 00:15:40.737 3.342 - 3.366: 98.5096% ( 1) 00:15:40.737 3.366 - 3.390: 98.5170% ( 1) 00:15:40.737 3.390 - 3.413: 98.5394% ( 3) 00:15:40.737 3.437 - 3.461: 98.5468% ( 1) 00:15:40.737 3.508 - 3.532: 98.5543% ( 1) 00:15:40.737 3.556 - 3.579: 
98.5617% ( 1) 00:15:40.737 3.579 - 3.603: 98.5766% ( 2) 00:15:40.737 3.603 - 3.627: 98.5841% ( 1) 00:15:40.737 3.627 - 3.650: 98.5915% ( 1) 00:15:40.737 3.650 - 3.674: 98.5990% ( 1) 00:15:40.737 3.674 - 3.698: 98.6214% ( 3) 00:15:40.737 3.745 - 3.769: 98.6288% ( 1) 00:15:40.737 3.840 - 3.864: 98.6437% ( 2) 00:15:40.737 3.864 - 3.887: 98.6512% ( 1) 00:15:40.737 4.006 - 4.030: 98.6586% ( 1) 00:15:40.737 4.290 - 4.314: 98.6661% ( 1) 00:15:40.737 4.954 - 4.978: 98.6735% ( 1) 00:15:40.737 4.978 - 5.001: 98.6810% ( 1) 00:15:40.737 5.073 - 5.096: 98.6884% ( 1) [2024-07-26 22:45:32.837729] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:40.737 5.096 - 5.120: 98.6959% ( 1) 00:15:40.737 5.167 - 5.191: 98.7033% ( 1) 00:15:40.737 5.215 - 5.239: 98.7108% ( 1) 00:15:40.737 5.547 - 5.570: 98.7182% ( 1) 00:15:40.737 5.594 - 5.618: 98.7257% ( 1) 00:15:40.737 5.713 - 5.736: 98.7331% ( 1) 00:15:40.737 5.855 - 5.879: 98.7406% ( 1) 00:15:40.737 5.926 - 5.950: 98.7480% ( 1) 00:15:40.737 5.973 - 5.997: 98.7555% ( 1) 00:15:40.737 5.997 - 6.021: 98.7629% ( 1) 00:15:40.737 6.021 - 6.044: 98.7704% ( 1) 00:15:40.737 6.044 - 6.068: 98.7779% ( 1) 00:15:40.737 6.068 - 6.116: 98.7853% ( 1) 00:15:40.737 6.116 - 6.163: 98.7928% ( 1) 00:15:40.737 6.305 - 6.353: 98.8002% ( 1) 00:15:40.737 6.684 - 6.732: 98.8077% ( 1) 00:15:40.737 7.253 - 7.301: 98.8151% ( 1) 00:15:40.737 15.455 - 15.550: 98.8226% ( 1) 00:15:40.737 15.550 - 15.644: 98.8375% ( 2) 00:15:40.737 15.644 - 15.739: 98.8524% ( 2) 00:15:40.737 15.739 - 15.834: 98.8673% ( 2) 00:15:40.737 15.834 - 15.929: 98.8822% ( 2) 00:15:40.737 15.929 - 16.024: 98.9045% ( 3) 00:15:40.737 16.024 - 16.119: 98.9269% ( 3) 00:15:40.737 16.119 - 16.213: 98.9493% ( 3) 00:15:40.737 16.213 - 16.308: 98.9791% ( 4) 00:15:40.737 16.308 - 16.403: 99.0014% ( 3) 00:15:40.737 16.403 - 16.498: 99.0312% ( 4) 00:15:40.737 16.498 - 16.593: 99.1132% ( 11) 00:15:40.737 16.593 - 16.687: 99.1728% ( 8) 00:15:40.737 16.687 - 16.782: 99.2026% ( 4) 00:15:40.737 16.782 - 16.877: 99.2250% ( 3) 00:15:40.738 16.877 - 16.972: 99.2548% ( 4) 00:15:40.738 16.972 - 17.067: 99.2771% ( 3) 00:15:40.738 17.067 - 17.161: 99.3144% ( 5) 00:15:40.738 17.161 - 17.256: 99.3219% ( 1) 00:15:40.738 17.256 - 17.351: 99.3442% ( 3) 00:15:40.738 17.446 - 17.541: 99.3517% ( 1) 00:15:40.738 17.541 - 17.636: 99.3666% ( 2) 00:15:40.738 17.636 - 17.730: 99.3815% ( 2) 00:15:40.738 17.730 - 17.825: 99.3889% ( 1) 00:15:40.738 17.825 - 17.920: 99.4038% ( 2) 00:15:40.738 17.920 - 18.015: 99.4113% ( 1) 00:15:40.738 18.015 - 18.110: 99.4187% ( 1) 00:15:40.738 18.204 - 18.299: 99.4336% ( 2) 00:15:40.738 18.489 - 18.584: 99.4411% ( 1) 00:15:40.738 23.893 - 23.988: 99.4485% ( 1) 00:15:40.738 3009.801 - 3021.938: 99.4560% ( 1) 00:15:40.738 3058.347 - 3070.483: 99.4634% ( 1) 00:15:40.738 3980.705 - 4004.978: 99.8957% ( 58) 00:15:40.738 4004.978 - 4029.250: 100.0000% ( 14) 00:15:40.738 00:15:40.738 22:45:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:40.738 22:45:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:40.738 22:45:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:40.738 22:45:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:40.738 22:45:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:40.738 [ 00:15:40.738 { 00:15:40.738 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.738 "subtype": "Discovery", 00:15:40.738 "listen_addresses": [], 00:15:40.738 "allow_any_host": true, 00:15:40.738 "hosts": [] 00:15:40.738 }, 00:15:40.738 { 00:15:40.738 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:40.738 "subtype": "NVMe", 00:15:40.738 "listen_addresses": [ 00:15:40.738 { 00:15:40.738 "trtype": "VFIOUSER", 00:15:40.738 "adrfam": "IPv4", 00:15:40.738 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:40.738 "trsvcid": "0" 00:15:40.738 } 00:15:40.738 ], 00:15:40.738 "allow_any_host": true, 00:15:40.738 "hosts": [], 00:15:40.738 "serial_number": "SPDK1", 00:15:40.738 "model_number": "SPDK bdev Controller", 00:15:40.738 "max_namespaces": 32, 00:15:40.738 "min_cntlid": 1, 00:15:40.738 "max_cntlid": 65519, 00:15:40.738 "namespaces": [ 00:15:40.738 { 00:15:40.738 "nsid": 1, 00:15:40.738 "bdev_name": "Malloc1", 00:15:40.738 "name": "Malloc1", 00:15:40.738 "nguid": "43CE2521C5804A8DA3E614D9BD110D0D", 00:15:40.738 "uuid": "43ce2521-c580-4a8d-a3e6-14d9bd110d0d" 00:15:40.738 } 00:15:40.738 ] 00:15:40.738 }, 00:15:40.738 { 00:15:40.738 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:40.738 "subtype": "NVMe", 00:15:40.738 "listen_addresses": [ 00:15:40.738 { 00:15:40.738 "trtype": "VFIOUSER", 00:15:40.738 "adrfam": "IPv4", 00:15:40.738 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:40.738 "trsvcid": "0" 00:15:40.738 } 00:15:40.738 ], 00:15:40.738 "allow_any_host": true, 00:15:40.738 "hosts": [], 00:15:40.738 "serial_number": "SPDK2", 00:15:40.738 "model_number": "SPDK bdev Controller", 00:15:40.738 "max_namespaces": 32, 00:15:40.738 "min_cntlid": 1, 00:15:40.738 "max_cntlid": 65519, 00:15:40.738 "namespaces": [ 00:15:40.738 { 00:15:40.738 "nsid": 1, 00:15:40.738 "bdev_name": "Malloc2", 00:15:40.738 "name": "Malloc2", 00:15:40.738 "nguid": "641C0B091884444CB1F70D83492DCAFF", 00:15:40.738 "uuid": "641c0b09-1884-444c-b1f7-0d83492dcaff" 00:15:40.738 } 00:15:40.738 ] 00:15:40.738 } 00:15:40.738 ] 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3506009 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:40.738 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:40.738 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.995 [2024-07-26 22:45:33.330008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:40.995 Malloc3 00:15:40.995 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:41.252 [2024-07-26 22:45:33.712790] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.252 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:41.509 Asynchronous Event Request test 00:15:41.509 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.509 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.509 Registering asynchronous event callbacks... 00:15:41.509 Starting namespace attribute notice tests for all controllers... 00:15:41.509 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:41.509 aer_cb - Changed Namespace 00:15:41.509 Cleaning up... 00:15:41.509 [ 00:15:41.509 { 00:15:41.509 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:41.509 "subtype": "Discovery", 00:15:41.509 "listen_addresses": [], 00:15:41.509 "allow_any_host": true, 00:15:41.509 "hosts": [] 00:15:41.509 }, 00:15:41.509 { 00:15:41.509 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:41.509 "subtype": "NVMe", 00:15:41.509 "listen_addresses": [ 00:15:41.509 { 00:15:41.509 "trtype": "VFIOUSER", 00:15:41.509 "adrfam": "IPv4", 00:15:41.509 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:41.509 "trsvcid": "0" 00:15:41.509 } 00:15:41.509 ], 00:15:41.509 "allow_any_host": true, 00:15:41.509 "hosts": [], 00:15:41.509 "serial_number": "SPDK1", 00:15:41.509 "model_number": "SPDK bdev Controller", 00:15:41.509 "max_namespaces": 32, 00:15:41.509 "min_cntlid": 1, 00:15:41.509 "max_cntlid": 65519, 00:15:41.509 "namespaces": [ 00:15:41.509 { 00:15:41.509 "nsid": 1, 00:15:41.509 "bdev_name": "Malloc1", 00:15:41.509 "name": "Malloc1", 00:15:41.509 "nguid": "43CE2521C5804A8DA3E614D9BD110D0D", 00:15:41.509 "uuid": "43ce2521-c580-4a8d-a3e6-14d9bd110d0d" 00:15:41.509 }, 00:15:41.509 { 00:15:41.509 "nsid": 2, 00:15:41.509 "bdev_name": "Malloc3", 00:15:41.509 "name": "Malloc3", 00:15:41.509 "nguid": "96DF842765C8410E8CFC118A1C924928", 00:15:41.509 "uuid": "96df8427-65c8-410e-8cfc-118a1c924928" 00:15:41.509 } 00:15:41.509 ] 00:15:41.509 }, 00:15:41.509 { 00:15:41.509 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:41.509 "subtype": "NVMe", 00:15:41.509 "listen_addresses": [ 00:15:41.509 { 00:15:41.509 "trtype": "VFIOUSER", 00:15:41.509 "adrfam": "IPv4", 00:15:41.509 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:41.509 "trsvcid": "0" 00:15:41.509 } 00:15:41.509 ], 00:15:41.509 "allow_any_host": true, 00:15:41.509 "hosts": [], 00:15:41.509 "serial_number": "SPDK2", 00:15:41.509 "model_number": "SPDK bdev Controller", 00:15:41.509 
"max_namespaces": 32, 00:15:41.509 "min_cntlid": 1, 00:15:41.509 "max_cntlid": 65519, 00:15:41.509 "namespaces": [ 00:15:41.509 { 00:15:41.509 "nsid": 1, 00:15:41.509 "bdev_name": "Malloc2", 00:15:41.509 "name": "Malloc2", 00:15:41.509 "nguid": "641C0B091884444CB1F70D83492DCAFF", 00:15:41.509 "uuid": "641c0b09-1884-444c-b1f7-0d83492dcaff" 00:15:41.509 } 00:15:41.509 ] 00:15:41.509 } 00:15:41.509 ] 00:15:41.509 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3506009 00:15:41.509 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:41.509 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:41.509 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:41.509 22:45:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:41.509 [2024-07-26 22:45:33.986215] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:41.509 [2024-07-26 22:45:33.986262] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506146 ] 00:15:41.509 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.768 [2024-07-26 22:45:34.019444] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:41.768 [2024-07-26 22:45:34.032111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:41.768 [2024-07-26 22:45:34.032143] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbb2cc3a000 00:15:41.768 [2024-07-26 22:45:34.033101] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.768 [2024-07-26 22:45:34.034108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.768 [2024-07-26 22:45:34.035112] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.768 [2024-07-26 22:45:34.036117] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.768 [2024-07-26 22:45:34.037139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.768 [2024-07-26 22:45:34.038126] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.768 [2024-07-26 22:45:34.039133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.768 [2024-07-26 22:45:34.040139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.768 [2024-07-26 22:45:34.041152] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:41.769 [2024-07-26 22:45:34.041174] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbb2b9f0000 00:15:41.769 [2024-07-26 22:45:34.042314] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:41.769 [2024-07-26 22:45:34.057110] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:41.769 [2024-07-26 22:45:34.057143] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:41.769 [2024-07-26 22:45:34.062258] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:41.769 [2024-07-26 22:45:34.062313] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:41.769 [2024-07-26 22:45:34.062419] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:41.769 [2024-07-26 22:45:34.062445] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:41.769 [2024-07-26 22:45:34.062456] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:41.769 [2024-07-26 22:45:34.063270] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:41.769 [2024-07-26 22:45:34.063299] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:41.769 [2024-07-26 22:45:34.063313] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:41.769 [2024-07-26 22:45:34.064275] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:41.769 [2024-07-26 22:45:34.064297] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:41.769 [2024-07-26 22:45:34.064312] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:41.769 [2024-07-26 22:45:34.065286] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:41.769 [2024-07-26 22:45:34.065307] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:41.769 [2024-07-26 22:45:34.066292] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:41.769 [2024-07-26 22:45:34.066312] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:41.769 [2024-07-26 22:45:34.066322] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:41.769 [2024-07-26 22:45:34.066333] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:41.769 [2024-07-26 22:45:34.066458] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:41.769 [2024-07-26 22:45:34.066467] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:41.769 [2024-07-26 22:45:34.066476] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:41.769 [2024-07-26 22:45:34.067299] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:41.769 [2024-07-26 22:45:34.068305] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:41.769 [2024-07-26 22:45:34.069310] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:41.769 [2024-07-26 22:45:34.070304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.769 [2024-07-26 22:45:34.070391] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:41.769 [2024-07-26 22:45:34.071325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:41.769 [2024-07-26 22:45:34.071359] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:41.769 [2024-07-26 22:45:34.071373] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.071397] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:41.769 [2024-07-26 22:45:34.071414] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.071440] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:41.769 [2024-07-26 22:45:34.071450] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.769 [2024-07-26 22:45:34.071469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.769 [2024-07-26 22:45:34.076075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:41.769 [2024-07-26 22:45:34.076117] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:41.769 [2024-07-26 22:45:34.076129] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:41.769 [2024-07-26 22:45:34.076138] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:41.769 [2024-07-26 22:45:34.076146] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:41.769 [2024-07-26 22:45:34.076154] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:41.769 [2024-07-26 22:45:34.076162] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:41.769 [2024-07-26 22:45:34.076171] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.076184] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.076201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:41.769 [2024-07-26 22:45:34.084070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:41.769 [2024-07-26 22:45:34.084096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.769 [2024-07-26 22:45:34.084110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.769 [2024-07-26 22:45:34.084122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.769 [2024-07-26 22:45:34.084135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.769 [2024-07-26 22:45:34.084144] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.084161] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.084177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:41.769 [2024-07-26 22:45:34.092070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:41.769 [2024-07-26 22:45:34.092093] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:41.769 [2024-07-26 22:45:34.092104] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.092116] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.092130] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.092145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:41.769 [2024-07-26 22:45:34.100071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:41.769 [2024-07-26 22:45:34.100147] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.100164] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.100177] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:41.769 [2024-07-26 22:45:34.100185] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:41.769 [2024-07-26 22:45:34.100195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:41.769 [2024-07-26 22:45:34.108071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:41.769 [2024-07-26 22:45:34.108094] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:41.769 [2024-07-26 22:45:34.108111] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.108127] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.108140] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:41.769 [2024-07-26 22:45:34.108148] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.769 [2024-07-26 22:45:34.108158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.769 [2024-07-26 22:45:34.116068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:41.769 [2024-07-26 22:45:34.116098] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:41.769 [2024-07-26 22:45:34.116116] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:41.770 [2024-07-26 22:45:34.116130] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:41.770 [2024-07-26 22:45:34.116138] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.770 [2024-07-26 22:45:34.116149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.770 [2024-07-26 22:45:34.124084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:41.770 [2024-07-26 22:45:34.124105] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:41.770 [2024-07-26 22:45:34.124124] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:41.770 [2024-07-26 22:45:34.124139] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:41.770 [2024-07-26 22:45:34.124151] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:41.770 [2024-07-26 22:45:34.124160] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:41.770 [2024-07-26 22:45:34.124169] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:41.770 [2024-07-26 22:45:34.124176] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:41.770 [2024-07-26 22:45:34.124185] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:41.770 [2024-07-26 22:45:34.124216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:41.770 [2024-07-26 22:45:34.132069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:41.770 [2024-07-26 22:45:34.132096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:41.770 [2024-07-26 22:45:34.140070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:41.770 [2024-07-26 22:45:34.140096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:41.770 [2024-07-26 22:45:34.148067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:41.770 [2024-07-26 22:45:34.148094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:41.770 [2024-07-26 22:45:34.156074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:41.770 [2024-07-26 22:45:34.156110] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:41.770 [2024-07-26 22:45:34.156120] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:41.770 [2024-07-26 22:45:34.156127] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:41.770 [2024-07-26 22:45:34.156133] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:41.770 [2024-07-26 22:45:34.156143] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:41.770 [2024-07-26 22:45:34.156154] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:41.770 [2024-07-26 22:45:34.156162] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:41.770 [2024-07-26 22:45:34.156171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:41.770 [2024-07-26 22:45:34.156182] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:41.770 [2024-07-26 22:45:34.156190] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.770 [2024-07-26 22:45:34.156198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.770 [2024-07-26 22:45:34.156215] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:41.770 [2024-07-26 22:45:34.156223] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:41.770 [2024-07-26 22:45:34.156233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:41.770 [2024-07-26 22:45:34.164072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:41.770 [2024-07-26 22:45:34.164100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:41.770 [2024-07-26 22:45:34.164116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:41.770 [2024-07-26 22:45:34.164130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:41.770 ===================================================== 00:15:41.770 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:41.770 ===================================================== 00:15:41.770 Controller Capabilities/Features 00:15:41.770 ================================ 00:15:41.770 Vendor ID: 4e58 00:15:41.770 Subsystem Vendor ID: 4e58 00:15:41.770 Serial Number: SPDK2 00:15:41.770 Model Number: SPDK bdev Controller 00:15:41.770 Firmware Version: 24.05.1 00:15:41.770 Recommended Arb Burst: 6 00:15:41.770 IEEE OUI Identifier: 8d 6b 50 00:15:41.770 Multi-path I/O 00:15:41.770 May have multiple subsystem ports: Yes 00:15:41.770 May have multiple controllers: Yes 00:15:41.770 Associated with SR-IOV VF: No 00:15:41.770 Max Data Transfer Size: 131072 00:15:41.770 Max Number of Namespaces: 32 00:15:41.770 Max Number of I/O Queues: 127 00:15:41.770 NVMe Specification Version (VS): 1.3 00:15:41.770 NVMe Specification Version (Identify): 1.3 00:15:41.770 Maximum Queue Entries: 256 00:15:41.770 Contiguous Queues Required: Yes 00:15:41.770 Arbitration Mechanisms Supported 00:15:41.770 Weighted Round Robin: Not Supported 00:15:41.770 Vendor Specific: Not Supported 00:15:41.770 Reset Timeout: 15000 ms 00:15:41.770 Doorbell Stride: 4 bytes 
00:15:41.770 NVM Subsystem Reset: Not Supported 00:15:41.770 Command Sets Supported 00:15:41.770 NVM Command Set: Supported 00:15:41.770 Boot Partition: Not Supported 00:15:41.770 Memory Page Size Minimum: 4096 bytes 00:15:41.770 Memory Page Size Maximum: 4096 bytes 00:15:41.770 Persistent Memory Region: Not Supported 00:15:41.770 Optional Asynchronous Events Supported 00:15:41.770 Namespace Attribute Notices: Supported 00:15:41.770 Firmware Activation Notices: Not Supported 00:15:41.770 ANA Change Notices: Not Supported 00:15:41.770 PLE Aggregate Log Change Notices: Not Supported 00:15:41.770 LBA Status Info Alert Notices: Not Supported 00:15:41.770 EGE Aggregate Log Change Notices: Not Supported 00:15:41.770 Normal NVM Subsystem Shutdown event: Not Supported 00:15:41.770 Zone Descriptor Change Notices: Not Supported 00:15:41.770 Discovery Log Change Notices: Not Supported 00:15:41.770 Controller Attributes 00:15:41.770 128-bit Host Identifier: Supported 00:15:41.770 Non-Operational Permissive Mode: Not Supported 00:15:41.770 NVM Sets: Not Supported 00:15:41.770 Read Recovery Levels: Not Supported 00:15:41.770 Endurance Groups: Not Supported 00:15:41.770 Predictable Latency Mode: Not Supported 00:15:41.770 Traffic Based Keep ALive: Not Supported 00:15:41.770 Namespace Granularity: Not Supported 00:15:41.770 SQ Associations: Not Supported 00:15:41.770 UUID List: Not Supported 00:15:41.770 Multi-Domain Subsystem: Not Supported 00:15:41.770 Fixed Capacity Management: Not Supported 00:15:41.770 Variable Capacity Management: Not Supported 00:15:41.770 Delete Endurance Group: Not Supported 00:15:41.770 Delete NVM Set: Not Supported 00:15:41.770 Extended LBA Formats Supported: Not Supported 00:15:41.770 Flexible Data Placement Supported: Not Supported 00:15:41.770 00:15:41.770 Controller Memory Buffer Support 00:15:41.770 ================================ 00:15:41.770 Supported: No 00:15:41.770 00:15:41.770 Persistent Memory Region Support 00:15:41.770 ================================ 00:15:41.770 Supported: No 00:15:41.770 00:15:41.770 Admin Command Set Attributes 00:15:41.770 ============================ 00:15:41.770 Security Send/Receive: Not Supported 00:15:41.770 Format NVM: Not Supported 00:15:41.770 Firmware Activate/Download: Not Supported 00:15:41.770 Namespace Management: Not Supported 00:15:41.770 Device Self-Test: Not Supported 00:15:41.770 Directives: Not Supported 00:15:41.770 NVMe-MI: Not Supported 00:15:41.770 Virtualization Management: Not Supported 00:15:41.770 Doorbell Buffer Config: Not Supported 00:15:41.770 Get LBA Status Capability: Not Supported 00:15:41.770 Command & Feature Lockdown Capability: Not Supported 00:15:41.770 Abort Command Limit: 4 00:15:41.770 Async Event Request Limit: 4 00:15:41.770 Number of Firmware Slots: N/A 00:15:41.770 Firmware Slot 1 Read-Only: N/A 00:15:41.770 Firmware Activation Without Reset: N/A 00:15:41.770 Multiple Update Detection Support: N/A 00:15:41.770 Firmware Update Granularity: No Information Provided 00:15:41.770 Per-Namespace SMART Log: No 00:15:41.770 Asymmetric Namespace Access Log Page: Not Supported 00:15:41.770 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:41.770 Command Effects Log Page: Supported 00:15:41.771 Get Log Page Extended Data: Supported 00:15:41.771 Telemetry Log Pages: Not Supported 00:15:41.771 Persistent Event Log Pages: Not Supported 00:15:41.771 Supported Log Pages Log Page: May Support 00:15:41.771 Commands Supported & Effects Log Page: Not Supported 00:15:41.771 Feature Identifiers & Effects Log Page:May 
Support 00:15:41.771 NVMe-MI Commands & Effects Log Page: May Support 00:15:41.771 Data Area 4 for Telemetry Log: Not Supported 00:15:41.771 Error Log Page Entries Supported: 128 00:15:41.771 Keep Alive: Supported 00:15:41.771 Keep Alive Granularity: 10000 ms 00:15:41.771 00:15:41.771 NVM Command Set Attributes 00:15:41.771 ========================== 00:15:41.771 Submission Queue Entry Size 00:15:41.771 Max: 64 00:15:41.771 Min: 64 00:15:41.771 Completion Queue Entry Size 00:15:41.771 Max: 16 00:15:41.771 Min: 16 00:15:41.771 Number of Namespaces: 32 00:15:41.771 Compare Command: Supported 00:15:41.771 Write Uncorrectable Command: Not Supported 00:15:41.771 Dataset Management Command: Supported 00:15:41.771 Write Zeroes Command: Supported 00:15:41.771 Set Features Save Field: Not Supported 00:15:41.771 Reservations: Not Supported 00:15:41.771 Timestamp: Not Supported 00:15:41.771 Copy: Supported 00:15:41.771 Volatile Write Cache: Present 00:15:41.771 Atomic Write Unit (Normal): 1 00:15:41.771 Atomic Write Unit (PFail): 1 00:15:41.771 Atomic Compare & Write Unit: 1 00:15:41.771 Fused Compare & Write: Supported 00:15:41.771 Scatter-Gather List 00:15:41.771 SGL Command Set: Supported (Dword aligned) 00:15:41.771 SGL Keyed: Not Supported 00:15:41.771 SGL Bit Bucket Descriptor: Not Supported 00:15:41.771 SGL Metadata Pointer: Not Supported 00:15:41.771 Oversized SGL: Not Supported 00:15:41.771 SGL Metadata Address: Not Supported 00:15:41.771 SGL Offset: Not Supported 00:15:41.771 Transport SGL Data Block: Not Supported 00:15:41.771 Replay Protected Memory Block: Not Supported 00:15:41.771 00:15:41.771 Firmware Slot Information 00:15:41.771 ========================= 00:15:41.771 Active slot: 1 00:15:41.771 Slot 1 Firmware Revision: 24.05.1 00:15:41.771 00:15:41.771 00:15:41.771 Commands Supported and Effects 00:15:41.771 ============================== 00:15:41.771 Admin Commands 00:15:41.771 -------------- 00:15:41.771 Get Log Page (02h): Supported 00:15:41.771 Identify (06h): Supported 00:15:41.771 Abort (08h): Supported 00:15:41.771 Set Features (09h): Supported 00:15:41.771 Get Features (0Ah): Supported 00:15:41.771 Asynchronous Event Request (0Ch): Supported 00:15:41.771 Keep Alive (18h): Supported 00:15:41.771 I/O Commands 00:15:41.771 ------------ 00:15:41.771 Flush (00h): Supported LBA-Change 00:15:41.771 Write (01h): Supported LBA-Change 00:15:41.771 Read (02h): Supported 00:15:41.771 Compare (05h): Supported 00:15:41.771 Write Zeroes (08h): Supported LBA-Change 00:15:41.771 Dataset Management (09h): Supported LBA-Change 00:15:41.771 Copy (19h): Supported LBA-Change 00:15:41.771 Unknown (79h): Supported LBA-Change 00:15:41.771 Unknown (7Ah): Supported 00:15:41.771 00:15:41.771 Error Log 00:15:41.771 ========= 00:15:41.771 00:15:41.771 Arbitration 00:15:41.771 =========== 00:15:41.771 Arbitration Burst: 1 00:15:41.771 00:15:41.771 Power Management 00:15:41.771 ================ 00:15:41.771 Number of Power States: 1 00:15:41.771 Current Power State: Power State #0 00:15:41.771 Power State #0: 00:15:41.771 Max Power: 0.00 W 00:15:41.771 Non-Operational State: Operational 00:15:41.771 Entry Latency: Not Reported 00:15:41.771 Exit Latency: Not Reported 00:15:41.771 Relative Read Throughput: 0 00:15:41.771 Relative Read Latency: 0 00:15:41.771 Relative Write Throughput: 0 00:15:41.771 Relative Write Latency: 0 00:15:41.771 Idle Power: Not Reported 00:15:41.771 Active Power: Not Reported 00:15:41.771 Non-Operational Permissive Mode: Not Supported 00:15:41.771 00:15:41.771 Health Information 
00:15:41.771 ================== 00:15:41.771 Critical Warnings: 00:15:41.771 Available Spare Space: OK 00:15:41.771 Temperature: OK 00:15:41.771 Device Reliability: OK 00:15:41.771 Read Only: No 00:15:41.771 Volatile Memory Backup: OK 00:15:41.771 Current Temperature: 0 Kelvin (-273 Celsius) [2024-07-26 22:45:34.164260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:41.771 [2024-07-26 22:45:34.172086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:41.771 [2024-07-26 22:45:34.172135] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:41.771 [2024-07-26 22:45:34.172152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.771 [2024-07-26 22:45:34.172163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.771 [2024-07-26 22:45:34.172173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.771 [2024-07-26 22:45:34.172183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.771 [2024-07-26 22:45:34.172246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:41.771 [2024-07-26 22:45:34.172266] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:41.771 [2024-07-26 22:45:34.173247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.771 [2024-07-26 22:45:34.173321] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:41.771 [2024-07-26 22:45:34.173336] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:41.771 [2024-07-26 22:45:34.174265] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:41.771 [2024-07-26 22:45:34.174289] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:41.771 [2024-07-26 22:45:34.174340] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:41.771 [2024-07-26 22:45:34.177071] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:41.771 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:41.771 Available Spare: 0% 00:15:41.771 Available Spare Threshold: 0% 00:15:41.771 Life Percentage Used: 0% 00:15:41.771 Data Units Read: 0 00:15:41.771 Data Units Written: 0 00:15:41.771 Host Read Commands: 0 00:15:41.771 Host Write Commands: 0 00:15:41.771 Controller Busy Time: 0 minutes 00:15:41.771 Power Cycles: 0 00:15:41.771 Power On Hours: 0 hours 00:15:41.771 Unsafe Shutdowns: 0 00:15:41.771 Unrecoverable Media Errors: 0 00:15:41.771 Lifetime Error Log Entries: 0 00:15:41.771 Warning Temperature Time: 0
minutes 00:15:41.771 Critical Temperature Time: 0 minutes 00:15:41.771 00:15:41.771 Number of Queues 00:15:41.771 ================ 00:15:41.771 Number of I/O Submission Queues: 127 00:15:41.771 Number of I/O Completion Queues: 127 00:15:41.771 00:15:41.771 Active Namespaces 00:15:41.771 ================= 00:15:41.771 Namespace ID:1 00:15:41.771 Error Recovery Timeout: Unlimited 00:15:41.771 Command Set Identifier: NVM (00h) 00:15:41.771 Deallocate: Supported 00:15:41.771 Deallocated/Unwritten Error: Not Supported 00:15:41.771 Deallocated Read Value: Unknown 00:15:41.771 Deallocate in Write Zeroes: Not Supported 00:15:41.771 Deallocated Guard Field: 0xFFFF 00:15:41.771 Flush: Supported 00:15:41.771 Reservation: Supported 00:15:41.771 Namespace Sharing Capabilities: Multiple Controllers 00:15:41.771 Size (in LBAs): 131072 (0GiB) 00:15:41.771 Capacity (in LBAs): 131072 (0GiB) 00:15:41.771 Utilization (in LBAs): 131072 (0GiB) 00:15:41.771 NGUID: 641C0B091884444CB1F70D83492DCAFF 00:15:41.771 UUID: 641c0b09-1884-444c-b1f7-0d83492dcaff 00:15:41.771 Thin Provisioning: Not Supported 00:15:41.771 Per-NS Atomic Units: Yes 00:15:41.771 Atomic Boundary Size (Normal): 0 00:15:41.771 Atomic Boundary Size (PFail): 0 00:15:41.771 Atomic Boundary Offset: 0 00:15:41.771 Maximum Single Source Range Length: 65535 00:15:41.771 Maximum Copy Length: 65535 00:15:41.771 Maximum Source Range Count: 1 00:15:41.771 NGUID/EUI64 Never Reused: No 00:15:41.771 Namespace Write Protected: No 00:15:41.771 Number of LBA Formats: 1 00:15:41.771 Current LBA Format: LBA Format #00 00:15:41.771 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:41.771 00:15:41.771 22:45:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:41.771 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.029 [2024-07-26 22:45:34.397926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:47.289 Initializing NVMe Controllers 00:15:47.289 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:47.289 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:47.289 Initialization complete. Launching workers. 
00:15:47.289 ======================================================== 00:15:47.289 Latency(us) 00:15:47.289 Device Information : IOPS MiB/s Average min max 00:15:47.289 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35953.08 140.44 3559.46 1161.51 7436.37 00:15:47.289 ======================================================== 00:15:47.289 Total : 35953.08 140.44 3559.46 1161.51 7436.37 00:15:47.289 00:15:47.289 [2024-07-26 22:45:39.508411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:47.289 22:45:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:47.289 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.289 [2024-07-26 22:45:39.745105] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:52.552 Initializing NVMe Controllers 00:15:52.552 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:52.552 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:52.552 Initialization complete. Launching workers. 00:15:52.552 ======================================================== 00:15:52.552 Latency(us) 00:15:52.552 Device Information : IOPS MiB/s Average min max 00:15:52.552 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34128.71 133.32 3750.43 1176.96 8299.69 00:15:52.552 ======================================================== 00:15:52.552 Total : 34128.71 133.32 3750.43 1176.96 8299.69 00:15:52.552 00:15:52.552 [2024-07-26 22:45:44.768132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:52.552 22:45:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:52.552 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.552 [2024-07-26 22:45:44.969929] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.817 [2024-07-26 22:45:50.121212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.817 Initializing NVMe Controllers 00:15:57.817 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:57.817 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:57.817 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:57.817 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:57.817 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:57.817 Initialization complete. Launching workers. 
00:15:57.817 Starting thread on core 2 00:15:57.817 Starting thread on core 3 00:15:57.817 Starting thread on core 1 00:15:57.817 22:45:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:57.817 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.076 [2024-07-26 22:45:50.419715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:02.297 [2024-07-26 22:45:54.310343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:02.297 Initializing NVMe Controllers 00:16:02.297 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.297 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.297 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:02.297 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:02.297 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:02.297 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:02.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:02.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:02.297 Initialization complete. Launching workers. 00:16:02.297 Starting thread on core 1 with urgent priority queue 00:16:02.297 Starting thread on core 2 with urgent priority queue 00:16:02.297 Starting thread on core 3 with urgent priority queue 00:16:02.297 Starting thread on core 0 with urgent priority queue 00:16:02.297 SPDK bdev Controller (SPDK2 ) core 0: 5077.67 IO/s 19.69 secs/100000 ios 00:16:02.297 SPDK bdev Controller (SPDK2 ) core 1: 5072.67 IO/s 19.71 secs/100000 ios 00:16:02.297 SPDK bdev Controller (SPDK2 ) core 2: 5567.67 IO/s 17.96 secs/100000 ios 00:16:02.297 SPDK bdev Controller (SPDK2 ) core 3: 5842.00 IO/s 17.12 secs/100000 ios 00:16:02.297 ======================================================== 00:16:02.297 00:16:02.297 22:45:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:02.297 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.297 [2024-07-26 22:45:54.601583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:02.297 Initializing NVMe Controllers 00:16:02.297 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.297 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.297 Namespace ID: 1 size: 0GB 00:16:02.297 Initialization complete. 00:16:02.297 INFO: using host memory buffer for IO 00:16:02.297 Hello world! 
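The driver runs above (perf read, perf write, reconnect, arbitration, hello_world) all point at the same vfio-user controller and differ only in the binary and its flags. A minimal sketch of the shared invocation pattern, assuming the target provisioned earlier in this run is still listening on that socket (flag roles as used above: -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -c core mask):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4 KiB sequential reads, queue depth 128, 5 s, pinned to core 1 (mask 0x2) -- the sh@84 run
$SPDK/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# the write pass (sh@85) only swaps -w; the example apps (sh@86..@89) take the same -r string, e.g.
$SPDK/build/examples/hello_world -d 256 -g -r "$TR"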
00:16:02.297 [2024-07-26 22:45:54.610635] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:02.297 22:45:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:02.297 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.554 [2024-07-26 22:45:54.889190] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:03.487 Initializing NVMe Controllers 00:16:03.487 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.487 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.487 Initialization complete. Launching workers. 00:16:03.487 submit (in ns) avg, min, max = 8114.6, 3518.9, 4018905.6 00:16:03.487 complete (in ns) avg, min, max = 26671.9, 2048.9, 4015791.1 00:16:03.487 00:16:03.487 Submit histogram 00:16:03.487 ================ 00:16:03.487 Range in us Cumulative Count 00:16:03.487 3.508 - 3.532: 0.1941% ( 26) 00:16:03.487 3.532 - 3.556: 0.7316% ( 72) 00:16:03.487 3.556 - 3.579: 2.5455% ( 243) 00:16:03.487 3.579 - 3.603: 6.3526% ( 510) 00:16:03.487 3.603 - 3.627: 13.9146% ( 1013) 00:16:03.487 3.627 - 3.650: 22.9845% ( 1215) 00:16:03.487 3.650 - 3.674: 33.8086% ( 1450) 00:16:03.487 3.674 - 3.698: 41.8558% ( 1078) 00:16:03.487 3.698 - 3.721: 49.9328% ( 1082) 00:16:03.487 3.721 - 3.745: 54.6730% ( 635) 00:16:03.487 3.745 - 3.769: 58.3532% ( 493) 00:16:03.487 3.769 - 3.793: 61.8020% ( 462) 00:16:03.487 3.793 - 3.816: 64.8701% ( 411) 00:16:03.487 3.816 - 3.840: 68.1845% ( 444) 00:16:03.487 3.840 - 3.864: 71.8274% ( 488) 00:16:03.487 3.864 - 3.887: 75.8809% ( 543) 00:16:03.487 3.887 - 3.911: 80.1881% ( 577) 00:16:03.487 3.911 - 3.935: 83.8161% ( 486) 00:16:03.487 3.935 - 3.959: 86.3019% ( 333) 00:16:03.487 3.959 - 3.982: 88.1084% ( 242) 00:16:03.487 3.982 - 4.006: 89.6611% ( 208) 00:16:03.487 4.006 - 4.030: 90.9600% ( 174) 00:16:03.487 4.030 - 4.053: 92.1842% ( 164) 00:16:03.487 4.053 - 4.077: 93.2069% ( 137) 00:16:03.487 4.077 - 4.101: 94.2147% ( 135) 00:16:03.487 4.101 - 4.124: 95.1926% ( 131) 00:16:03.487 4.124 - 4.148: 95.8271% ( 85) 00:16:03.487 4.148 - 4.172: 96.2451% ( 56) 00:16:03.487 4.172 - 4.196: 96.5363% ( 39) 00:16:03.487 4.196 - 4.219: 96.7080% ( 23) 00:16:03.487 4.219 - 4.243: 96.9319% ( 30) 00:16:03.487 4.243 - 4.267: 97.1335% ( 27) 00:16:03.487 4.267 - 4.290: 97.2678% ( 18) 00:16:03.487 4.290 - 4.314: 97.3873% ( 16) 00:16:03.487 4.314 - 4.338: 97.4918% ( 14) 00:16:03.487 4.338 - 4.361: 97.6187% ( 17) 00:16:03.487 4.361 - 4.385: 97.6411% ( 3) 00:16:03.487 4.385 - 4.409: 97.7008% ( 8) 00:16:03.487 4.409 - 4.433: 97.7531% ( 7) 00:16:03.487 4.433 - 4.456: 97.7755% ( 3) 00:16:03.487 4.456 - 4.480: 97.7829% ( 1) 00:16:03.487 4.480 - 4.504: 97.7979% ( 2) 00:16:03.487 4.599 - 4.622: 97.8053% ( 1) 00:16:03.487 4.670 - 4.693: 97.8128% ( 1) 00:16:03.487 4.693 - 4.717: 97.8202% ( 1) 00:16:03.487 4.717 - 4.741: 97.8277% ( 1) 00:16:03.487 4.788 - 4.812: 97.8426% ( 2) 00:16:03.487 4.859 - 4.883: 97.8576% ( 2) 00:16:03.487 4.883 - 4.907: 97.8874% ( 4) 00:16:03.487 4.907 - 4.930: 97.9173% ( 4) 00:16:03.487 4.930 - 4.954: 97.9322% ( 2) 00:16:03.487 4.954 - 4.978: 98.0069% ( 10) 00:16:03.487 4.978 - 5.001: 98.0442% ( 5) 00:16:03.487 5.001 - 5.025: 98.0741% ( 4) 00:16:03.487 5.025 - 5.049: 98.1114% ( 5) 00:16:03.487 5.049 - 5.073: 98.1188% ( 1) 00:16:03.487 
5.073 - 5.096: 98.1786% ( 8) 00:16:03.487 5.096 - 5.120: 98.2383% ( 8) 00:16:03.487 5.120 - 5.144: 98.2831% ( 6) 00:16:03.487 5.144 - 5.167: 98.3503% ( 9) 00:16:03.488 5.167 - 5.191: 98.3726% ( 3) 00:16:03.488 5.191 - 5.215: 98.4025% ( 4) 00:16:03.488 5.215 - 5.239: 98.4100% ( 1) 00:16:03.488 5.239 - 5.262: 98.4473% ( 5) 00:16:03.488 5.262 - 5.286: 98.4622% ( 2) 00:16:03.488 5.286 - 5.310: 98.4846% ( 3) 00:16:03.488 5.333 - 5.357: 98.4921% ( 1) 00:16:03.488 5.357 - 5.381: 98.4996% ( 1) 00:16:03.488 5.381 - 5.404: 98.5070% ( 1) 00:16:03.488 5.404 - 5.428: 98.5145% ( 1) 00:16:03.488 5.428 - 5.452: 98.5294% ( 2) 00:16:03.488 5.452 - 5.476: 98.5369% ( 1) 00:16:03.488 5.499 - 5.523: 98.5443% ( 1) 00:16:03.488 5.523 - 5.547: 98.5593% ( 2) 00:16:03.488 5.547 - 5.570: 98.5667% ( 1) 00:16:03.488 5.618 - 5.641: 98.5742% ( 1) 00:16:03.488 5.641 - 5.665: 98.5817% ( 1) 00:16:03.488 5.736 - 5.760: 98.5891% ( 1) 00:16:03.488 5.902 - 5.926: 98.5966% ( 1) 00:16:03.488 6.258 - 6.305: 98.6115% ( 2) 00:16:03.488 6.305 - 6.353: 98.6190% ( 1) 00:16:03.488 6.353 - 6.400: 98.6265% ( 1) 00:16:03.488 6.684 - 6.732: 98.6339% ( 1) 00:16:03.488 6.874 - 6.921: 98.6563% ( 3) 00:16:03.488 6.921 - 6.969: 98.6638% ( 1) 00:16:03.488 6.969 - 7.016: 98.6787% ( 2) 00:16:03.488 7.111 - 7.159: 98.6862% ( 1) 00:16:03.488 7.159 - 7.206: 98.6936% ( 1) 00:16:03.488 7.206 - 7.253: 98.7011% ( 1) 00:16:03.488 7.253 - 7.301: 98.7086% ( 1) 00:16:03.488 7.443 - 7.490: 98.7160% ( 1) 00:16:03.488 7.490 - 7.538: 98.7310% ( 2) 00:16:03.488 7.680 - 7.727: 98.7459% ( 2) 00:16:03.488 7.727 - 7.775: 98.7608% ( 2) 00:16:03.488 7.822 - 7.870: 98.7758% ( 2) 00:16:03.488 7.917 - 7.964: 98.7832% ( 1) 00:16:03.488 8.059 - 8.107: 98.7907% ( 1) 00:16:03.488 8.154 - 8.201: 98.7981% ( 1) 00:16:03.488 8.201 - 8.249: 98.8056% ( 1) 00:16:03.488 8.249 - 8.296: 98.8131% ( 1) 00:16:03.488 8.296 - 8.344: 98.8205% ( 1) 00:16:03.488 8.344 - 8.391: 98.8280% ( 1) 00:16:03.488 8.439 - 8.486: 98.8355% ( 1) 00:16:03.488 8.486 - 8.533: 98.8429% ( 1) 00:16:03.488 8.533 - 8.581: 98.8504% ( 1) 00:16:03.488 8.581 - 8.628: 98.8579% ( 1) 00:16:03.488 8.676 - 8.723: 98.8653% ( 1) 00:16:03.488 8.723 - 8.770: 98.8803% ( 2) 00:16:03.488 8.818 - 8.865: 98.8877% ( 1) 00:16:03.488 8.865 - 8.913: 98.8952% ( 1) 00:16:03.488 8.913 - 8.960: 98.9027% ( 1) 00:16:03.488 9.150 - 9.197: 98.9101% ( 1) 00:16:03.488 9.576 - 9.624: 98.9176% ( 1) 00:16:03.488 11.046 - 11.093: 98.9251% ( 1) 00:16:03.488 11.520 - 11.567: 98.9325% ( 1) 00:16:03.488 11.567 - 11.615: 98.9400% ( 1) 00:16:03.488 11.757 - 11.804: 98.9474% ( 1) 00:16:03.488 12.326 - 12.421: 98.9549% ( 1) 00:16:03.488 13.179 - 13.274: 98.9624% ( 1) 00:16:03.488 13.559 - 13.653: 98.9773% ( 2) 00:16:03.488 13.843 - 13.938: 98.9922% ( 2) 00:16:03.488 14.127 - 14.222: 98.9997% ( 1) 00:16:03.488 14.507 - 14.601: 99.0072% ( 1) 00:16:03.488 14.601 - 14.696: 99.0146% ( 1) 00:16:03.488 14.791 - 14.886: 99.0221% ( 1) 00:16:03.488 17.256 - 17.351: 99.0296% ( 1) 00:16:03.488 17.351 - 17.446: 99.0893% ( 8) 00:16:03.488 17.541 - 17.636: 99.1266% ( 5) 00:16:03.488 17.636 - 17.730: 99.1789% ( 7) 00:16:03.488 17.730 - 17.825: 99.1863% ( 1) 00:16:03.488 17.825 - 17.920: 99.2236% ( 5) 00:16:03.488 17.920 - 18.015: 99.2460% ( 3) 00:16:03.488 18.015 - 18.110: 99.3282% ( 11) 00:16:03.488 18.110 - 18.204: 99.4177% ( 12) 00:16:03.488 18.204 - 18.299: 99.4999% ( 11) 00:16:03.488 18.299 - 18.394: 99.5745% ( 10) 00:16:03.488 18.394 - 18.489: 99.6342% ( 8) 00:16:03.488 18.489 - 18.584: 99.6790% ( 6) 00:16:03.488 18.584 - 18.679: 99.7014% ( 3) 00:16:03.488 18.679 - 
18.773: 99.7313% ( 4) 00:16:03.488 18.773 - 18.868: 99.7537% ( 3) 00:16:03.488 18.868 - 18.963: 99.7686% ( 2) 00:16:03.488 18.963 - 19.058: 99.7761% ( 1) 00:16:03.488 19.058 - 19.153: 99.7910% ( 2) 00:16:03.488 19.247 - 19.342: 99.8059% ( 2) 00:16:03.488 19.437 - 19.532: 99.8134% ( 1) 00:16:03.488 19.532 - 19.627: 99.8358% ( 3) 00:16:03.488 19.721 - 19.816: 99.8432% ( 1) 00:16:03.488 20.006 - 20.101: 99.8507% ( 1) 00:16:03.488 20.101 - 20.196: 99.8582% ( 1) 00:16:03.488 20.670 - 20.764: 99.8656% ( 1) 00:16:03.488 22.566 - 22.661: 99.8731% ( 1) 00:16:03.488 22.850 - 22.945: 99.8806% ( 1) 00:16:03.488 26.738 - 26.927: 99.8880% ( 1) 00:16:03.488 27.307 - 27.496: 99.8955% ( 1) 00:16:03.488 3980.705 - 4004.978: 99.9627% ( 9) 00:16:03.488 4004.978 - 4029.250: 100.0000% ( 5) 00:16:03.488 00:16:03.488 Complete histogram 00:16:03.488 ================== 00:16:03.488 Range in us Cumulative Count 00:16:03.488 2.039 - 2.050: 0.0299% ( 4) 00:16:03.488 2.050 - 2.062: 13.8474% ( 1851) 00:16:03.488 2.062 - 2.074: 34.5850% ( 2778) 00:16:03.488 2.074 - 2.086: 37.7725% ( 427) 00:16:03.488 2.086 - 2.098: 49.1789% ( 1528) 00:16:03.488 2.098 - 2.110: 59.8537% ( 1430) 00:16:03.488 2.110 - 2.121: 62.0185% ( 290) 00:16:03.488 2.121 - 2.133: 70.2299% ( 1100) 00:16:03.488 2.133 - 2.145: 73.7534% ( 472) 00:16:03.488 2.145 - 2.157: 74.9030% ( 154) 00:16:03.488 2.157 - 2.169: 78.6727% ( 505) 00:16:03.488 2.169 - 2.181: 80.5912% ( 257) 00:16:03.488 2.181 - 2.193: 81.5169% ( 124) 00:16:03.488 2.193 - 2.204: 85.5405% ( 539) 00:16:03.488 2.204 - 2.216: 87.8397% ( 308) 00:16:03.488 2.216 - 2.228: 90.1239% ( 306) 00:16:03.488 2.228 - 2.240: 92.2663% ( 287) 00:16:03.488 2.240 - 2.252: 93.6772% ( 189) 00:16:03.488 2.252 - 2.264: 93.9758% ( 40) 00:16:03.488 2.264 - 2.276: 94.2595% ( 38) 00:16:03.488 2.276 - 2.287: 94.7596% ( 67) 00:16:03.488 2.287 - 2.299: 95.2971% ( 72) 00:16:03.488 2.299 - 2.311: 95.5061% ( 28) 00:16:03.488 2.311 - 2.323: 95.6032% ( 13) 00:16:03.488 2.323 - 2.335: 95.6703% ( 9) 00:16:03.488 2.335 - 2.347: 95.8271% ( 21) 00:16:03.488 2.347 - 2.359: 96.1033% ( 37) 00:16:03.488 2.359 - 2.370: 96.6035% ( 67) 00:16:03.488 2.370 - 2.382: 97.1111% ( 68) 00:16:03.488 2.382 - 2.394: 97.4843% ( 50) 00:16:03.488 2.394 - 2.406: 97.7605% ( 37) 00:16:03.488 2.406 - 2.418: 97.8650% ( 14) 00:16:03.488 2.418 - 2.430: 97.9546% ( 12) 00:16:03.488 2.430 - 2.441: 98.0666% ( 15) 00:16:03.488 2.441 - 2.453: 98.1487% ( 11) 00:16:03.488 2.453 - 2.465: 98.2457% ( 13) 00:16:03.488 2.465 - 2.477: 98.3428% ( 13) 00:16:03.488 2.477 - 2.489: 98.3876% ( 6) 00:16:03.488 2.489 - 2.501: 98.4025% ( 2) 00:16:03.488 2.501 - 2.513: 98.4249% ( 3) 00:16:03.488 2.513 - 2.524: 98.4548% ( 4) 00:16:03.488 2.524 - 2.536: 98.4846% ( 4) 00:16:03.488 2.536 - 2.548: 98.5219% ( 5) 00:16:03.488 2.548 - 2.560: 98.5443% ( 3) 00:16:03.488 2.572 - 2.584: 98.5518% ( 1) 00:16:03.488 2.667 - 2.679: 98.5593% ( 1) 00:16:03.488 2.690 - 2.702: 98.5667% ( 1) 00:16:03.488 2.750 - 2.761: 98.5742% ( 1) 00:16:03.488 2.963 - 2.975: 98.5817% ( 1) 00:16:03.488 3.342 - 3.366: 98.5891% ( 1) 00:16:03.488 3.437 - 3.461: 98.5966% ( 1) 00:16:03.488 3.484 - 3.508: 98.6041% ( 1) 00:16:03.488 3.532 - 3.556: 98.6115% ( 1) 00:16:03.488 3.556 - 3.579: 98.6190% ( 1) 00:16:03.488 3.603 - 3.627: 98.6339% ( 2) 00:16:03.488 3.627 - 3.650: 98.6638% ( 4) 00:16:03.488 3.650 - 3.674: 98.6712% ( 1) 00:16:03.488 3.674 - 3.698: 98.6787% ( 1) 00:16:03.488 3.721 - 3.745: 98.6862% ( 1) 00:16:03.488 3.793 - 3.816: 98.6936% ( 1) 00:16:03.488 3.816 - 3.840: 98.7011% ( 1) 00:16:03.488 3.887 - 3.911: 98.7160% ( 
2) 00:16:03.488 3.982 - 4.006: 98.7235% ( 1) 00:16:03.488 4.053 - 4.077: 98.7310% ( 1) 00:16:03.488 4.196 - 4.219: 98.7384% ( 1) 00:16:03.488 5.286 - 5.310: 98.7459% ( 1) 00:16:03.488 5.357 - 5.381: 98.7534% ( 1) 00:16:03.488 5.476 - 5.499: 98.7608% ( 1) 00:16:03.488 5.547 - 5.570: 98.7683% ( 1) 00:16:03.488 5.689 - 5.713: 98.7832% ( 2) 00:16:03.488 5.902 - 5.926: 98.7907% ( 1) 00:16:03.488 5.926 - 5.950: 98.7981% ( 1) 00:16:03.488 6.021 - 6.044: 98.8056% ( 1) 00:16:03.488 6.068 - 6.116: 98.8131% ( 1) 00:16:03.488 6.163 - 6.210: 98.8280% ( 2) 00:16:03.488 6.305 - 6.353: 98.8429% ( 2) 00:16:03.488 6.353 - 6.400: 98.8579% ( 2) 00:16:03.488 6.495 - 6.542: 98.8653% ( 1) 00:16:03.488 6.637 - 6.684: 98.8728% ( 1) 00:16:03.488 6.921 - 6.969: 98.8803% ( 1) 00:16:03.489 7.443 - 7.490: 98.8877% ( 1) 00:16:03.489 15.550 - 15.644: 98.9027% ( 2) 00:16:03.489 15.644 - 15.739: 98.9176% ( 2) 00:16:03.489 15.739 - 15.834: 98.9325% ( 2) 00:16:03.489 15.929 - 16.024: 98.9400% ( 1) 00:16:03.489 16.024 - 16.119: 98.9549% ( 2) 00:16:03.489 16.119 - 16.213: 98.9698% ( 2) 00:16:03.489 16.213 - 16.308: 98.9848% ( 2) 00:16:03.489 16.308 - 16.403: 99.0072% ( 3) 00:16:03.489 16.403 - 16.498: 99.0594% ( 7) 00:16:03.489 16.498 - 16.593: 99.0967% ( 5) 00:16:03.489 16.593 - 16.687: 99.1266% ( 4) 00:16:03.489 16.687 - 16.782: 99.1490% ( 3) 00:16:03.489 16.782 - 16.877: 99.1863% ( 5) 00:16:03.489 16.877 - 16.972: 9[2024-07-26 22:45:55.988788] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:03.746 9.2013% ( 2) 00:16:03.746 16.972 - 17.067: 99.2087% ( 1) 00:16:03.746 17.067 - 17.161: 99.2162% ( 1) 00:16:03.746 17.161 - 17.256: 99.2460% ( 4) 00:16:03.746 17.256 - 17.351: 99.2535% ( 1) 00:16:03.746 17.351 - 17.446: 99.2610% ( 1) 00:16:03.746 17.446 - 17.541: 99.2684% ( 1) 00:16:03.746 17.541 - 17.636: 99.2834% ( 2) 00:16:03.746 17.825 - 17.920: 99.2908% ( 1) 00:16:03.746 18.015 - 18.110: 99.3058% ( 2) 00:16:03.746 18.204 - 18.299: 99.3282% ( 3) 00:16:03.746 18.773 - 18.868: 99.3356% ( 1) 00:16:03.746 18.868 - 18.963: 99.3506% ( 2) 00:16:03.746 19.058 - 19.153: 99.3580% ( 1) 00:16:03.746 19.342 - 19.437: 99.3655% ( 1) 00:16:03.746 19.816 - 19.911: 99.3729% ( 1) 00:16:03.746 19.911 - 20.006: 99.3804% ( 1) 00:16:03.746 20.670 - 20.764: 99.3879% ( 1) 00:16:03.746 3616.616 - 3640.889: 99.3953% ( 1) 00:16:03.746 3980.705 - 4004.978: 99.8134% ( 56) 00:16:03.746 4004.978 - 4029.250: 100.0000% ( 25) 00:16:03.746 00:16:03.746 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:03.746 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:03.746 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:03.746 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:03.746 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.003 [ 00:16:04.003 { 00:16:04.003 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.003 "subtype": "Discovery", 00:16:04.003 "listen_addresses": [], 00:16:04.003 "allow_any_host": true, 00:16:04.003 "hosts": [] 00:16:04.003 }, 00:16:04.003 { 00:16:04.003 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.003 "subtype": "NVMe", 00:16:04.003 "listen_addresses": [ 00:16:04.003 { 
00:16:04.003 "trtype": "VFIOUSER", 00:16:04.003 "adrfam": "IPv4", 00:16:04.003 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.003 "trsvcid": "0" 00:16:04.003 } 00:16:04.003 ], 00:16:04.003 "allow_any_host": true, 00:16:04.003 "hosts": [], 00:16:04.003 "serial_number": "SPDK1", 00:16:04.003 "model_number": "SPDK bdev Controller", 00:16:04.003 "max_namespaces": 32, 00:16:04.003 "min_cntlid": 1, 00:16:04.003 "max_cntlid": 65519, 00:16:04.003 "namespaces": [ 00:16:04.003 { 00:16:04.003 "nsid": 1, 00:16:04.003 "bdev_name": "Malloc1", 00:16:04.003 "name": "Malloc1", 00:16:04.003 "nguid": "43CE2521C5804A8DA3E614D9BD110D0D", 00:16:04.003 "uuid": "43ce2521-c580-4a8d-a3e6-14d9bd110d0d" 00:16:04.003 }, 00:16:04.003 { 00:16:04.003 "nsid": 2, 00:16:04.003 "bdev_name": "Malloc3", 00:16:04.004 "name": "Malloc3", 00:16:04.004 "nguid": "96DF842765C8410E8CFC118A1C924928", 00:16:04.004 "uuid": "96df8427-65c8-410e-8cfc-118a1c924928" 00:16:04.004 } 00:16:04.004 ] 00:16:04.004 }, 00:16:04.004 { 00:16:04.004 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.004 "subtype": "NVMe", 00:16:04.004 "listen_addresses": [ 00:16:04.004 { 00:16:04.004 "trtype": "VFIOUSER", 00:16:04.004 "adrfam": "IPv4", 00:16:04.004 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.004 "trsvcid": "0" 00:16:04.004 } 00:16:04.004 ], 00:16:04.004 "allow_any_host": true, 00:16:04.004 "hosts": [], 00:16:04.004 "serial_number": "SPDK2", 00:16:04.004 "model_number": "SPDK bdev Controller", 00:16:04.004 "max_namespaces": 32, 00:16:04.004 "min_cntlid": 1, 00:16:04.004 "max_cntlid": 65519, 00:16:04.004 "namespaces": [ 00:16:04.004 { 00:16:04.004 "nsid": 1, 00:16:04.004 "bdev_name": "Malloc2", 00:16:04.004 "name": "Malloc2", 00:16:04.004 "nguid": "641C0B091884444CB1F70D83492DCAFF", 00:16:04.004 "uuid": "641c0b09-1884-444c-b1f7-0d83492dcaff" 00:16:04.004 } 00:16:04.004 ] 00:16:04.004 } 00:16:04.004 ] 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3508789 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:04.004 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:04.004 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.004 [2024-07-26 22:45:56.430332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.262 Malloc4 00:16:04.262 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:04.519 [2024-07-26 22:45:56.787935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.519 22:45:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.519 Asynchronous Event Request test 00:16:04.519 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.519 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.519 Registering asynchronous event callbacks... 00:16:04.519 Starting namespace attribute notice tests for all controllers... 00:16:04.519 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:04.519 aer_cb - Changed Namespace 00:16:04.519 Cleaning up... 00:16:04.777 [ 00:16:04.777 { 00:16:04.777 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.777 "subtype": "Discovery", 00:16:04.777 "listen_addresses": [], 00:16:04.777 "allow_any_host": true, 00:16:04.777 "hosts": [] 00:16:04.777 }, 00:16:04.777 { 00:16:04.777 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.777 "subtype": "NVMe", 00:16:04.777 "listen_addresses": [ 00:16:04.777 { 00:16:04.777 "trtype": "VFIOUSER", 00:16:04.777 "adrfam": "IPv4", 00:16:04.777 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.777 "trsvcid": "0" 00:16:04.777 } 00:16:04.777 ], 00:16:04.777 "allow_any_host": true, 00:16:04.777 "hosts": [], 00:16:04.777 "serial_number": "SPDK1", 00:16:04.777 "model_number": "SPDK bdev Controller", 00:16:04.777 "max_namespaces": 32, 00:16:04.777 "min_cntlid": 1, 00:16:04.777 "max_cntlid": 65519, 00:16:04.777 "namespaces": [ 00:16:04.777 { 00:16:04.777 "nsid": 1, 00:16:04.777 "bdev_name": "Malloc1", 00:16:04.777 "name": "Malloc1", 00:16:04.777 "nguid": "43CE2521C5804A8DA3E614D9BD110D0D", 00:16:04.777 "uuid": "43ce2521-c580-4a8d-a3e6-14d9bd110d0d" 00:16:04.777 }, 00:16:04.777 { 00:16:04.777 "nsid": 2, 00:16:04.777 "bdev_name": "Malloc3", 00:16:04.777 "name": "Malloc3", 00:16:04.777 "nguid": "96DF842765C8410E8CFC118A1C924928", 00:16:04.777 "uuid": "96df8427-65c8-410e-8cfc-118a1c924928" 00:16:04.777 } 00:16:04.777 ] 00:16:04.777 }, 00:16:04.777 { 00:16:04.777 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.777 "subtype": "NVMe", 00:16:04.777 "listen_addresses": [ 00:16:04.777 { 00:16:04.777 "trtype": "VFIOUSER", 00:16:04.777 "adrfam": "IPv4", 00:16:04.777 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.777 "trsvcid": "0" 00:16:04.777 } 00:16:04.777 ], 00:16:04.777 "allow_any_host": true, 00:16:04.777 "hosts": [], 00:16:04.777 "serial_number": "SPDK2", 00:16:04.777 "model_number": "SPDK bdev Controller", 00:16:04.777 
"max_namespaces": 32, 00:16:04.777 "min_cntlid": 1, 00:16:04.777 "max_cntlid": 65519, 00:16:04.777 "namespaces": [ 00:16:04.777 { 00:16:04.777 "nsid": 1, 00:16:04.777 "bdev_name": "Malloc2", 00:16:04.777 "name": "Malloc2", 00:16:04.777 "nguid": "641C0B091884444CB1F70D83492DCAFF", 00:16:04.777 "uuid": "641c0b09-1884-444c-b1f7-0d83492dcaff" 00:16:04.777 }, 00:16:04.777 { 00:16:04.777 "nsid": 2, 00:16:04.777 "bdev_name": "Malloc4", 00:16:04.777 "name": "Malloc4", 00:16:04.777 "nguid": "3593F7575A7A4DAEA4F8FA4D14A74A39", 00:16:04.777 "uuid": "3593f757-5a7a-4dae-a4f8-fa4d14a74a39" 00:16:04.777 } 00:16:04.777 ] 00:16:04.777 } 00:16:04.777 ] 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3508789 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3502938 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3502938 ']' 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3502938 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3502938 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3502938' 00:16:04.777 killing process with pid 3502938 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3502938 00:16:04.777 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3502938 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3508930 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3508930' 00:16:05.035 Process pid: 3508930 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3508930 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3508930 ']' 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.035 22:45:57 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:05.035 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:05.035 [2024-07-26 22:45:57.445981] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:05.035 [2024-07-26 22:45:57.446967] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:05.035 [2024-07-26 22:45:57.447035] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.035 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.035 [2024-07-26 22:45:57.510757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.293 [2024-07-26 22:45:57.605886] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.293 [2024-07-26 22:45:57.605948] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.293 [2024-07-26 22:45:57.605981] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.293 [2024-07-26 22:45:57.605995] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.293 [2024-07-26 22:45:57.606007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.293 [2024-07-26 22:45:57.606086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.293 [2024-07-26 22:45:57.606131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.293 [2024-07-26 22:45:57.606229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.293 [2024-07-26 22:45:57.606232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.293 [2024-07-26 22:45:57.702297] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:05.293 [2024-07-26 22:45:57.702468] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:05.294 [2024-07-26 22:45:57.702774] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:05.294 [2024-07-26 22:45:57.703314] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:05.294 [2024-07-26 22:45:57.703553] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
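With the interrupt-mode target up, the trace below re-provisions both vfio-user devices. Condensed into a sketch, the RPC sequence it steps through (per the sh@64..@74 lines that follow; the loop is a summary of those steps, not the script itself):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER -M -I   # -M -I are this run's interrupt-mode transport_args
for i in 1 2; do
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  $rpc bdev_malloc_create 64 512 -b Malloc$i   # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done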
00:16:05.294 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:05.294 22:45:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:16:05.294 22:45:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:06.664 22:45:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:06.664 22:45:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:06.664 22:45:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:06.664 22:45:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:06.664 22:45:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:06.664 22:45:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:06.922 Malloc1 00:16:06.922 22:45:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:07.181 22:45:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:07.439 22:45:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:07.696 22:45:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.696 22:45:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:07.696 22:46:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:07.954 Malloc2 00:16:07.954 22:46:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:08.212 22:46:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:08.470 22:46:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:08.727 22:46:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:08.727 22:46:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3508930 00:16:08.727 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3508930 ']' 00:16:08.727 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3508930 00:16:08.727 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:08.727 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:08.727 22:46:01 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3508930 00:16:08.728 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:08.728 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:08.728 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3508930' 00:16:08.728 killing process with pid 3508930 00:16:08.728 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3508930 00:16:08.728 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3508930 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:08.987 00:16:08.987 real 0m53.101s 00:16:08.987 user 3m29.869s 00:16:08.987 sys 0m4.292s 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:08.987 ************************************ 00:16:08.987 END TEST nvmf_vfio_user 00:16:08.987 ************************************ 00:16:08.987 22:46:01 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:08.987 22:46:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:08.987 22:46:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:08.987 22:46:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.987 ************************************ 00:16:08.987 START TEST nvmf_vfio_user_nvme_compliance 00:16:08.987 ************************************ 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:08.987 * Looking for test storage... 
00:16:08.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.987 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3509407 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3509407' 00:16:08.988 Process pid: 3509407 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3509407 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3509407 ']' 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:08.988 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:09.246 [2024-07-26 22:46:01.503244] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:09.246 [2024-07-26 22:46:01.503318] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.246 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.246 [2024-07-26 22:46:01.568486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.246 [2024-07-26 22:46:01.663877] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.246 [2024-07-26 22:46:01.663931] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.246 [2024-07-26 22:46:01.663956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.246 [2024-07-26 22:46:01.663971] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.246 [2024-07-26 22:46:01.663983] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
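An aside on the nvmf_get_subsystems listings captured during the AER test earlier: the namespace table the test compares (one namespace on cnode2 before the Malloc4 add, two after) can be pulled straight out of that JSON. A hypothetical one-liner, assuming jq is available on the test host:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
  | jq -r '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode2") | .namespaces[] | "\(.nsid)\t\(.bdev_name)\t\(.uuid)"'

Run against the second listing above, this would print two rows (Malloc2 and Malloc4), matching the namespace change that aer_cb reported.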
00:16:09.246 [2024-07-26 22:46:01.665087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.246 [2024-07-26 22:46:01.665128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.246 [2024-07-26 22:46:01.665131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.504 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:09.504 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:16:09.504 22:46:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:10.437 malloc0 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:10.437 22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.437 
22:46:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:10.437 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.694 00:16:10.694 00:16:10.694 CUnit - A unit testing framework for C - Version 2.1-3 00:16:10.694 http://cunit.sourceforge.net/ 00:16:10.694 00:16:10.694 00:16:10.694 Suite: nvme_compliance 00:16:10.694 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 22:46:03.022012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:10.694 [2024-07-26 22:46:03.023557] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:10.694 [2024-07-26 22:46:03.023583] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:10.694 [2024-07-26 22:46:03.023595] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:10.694 [2024-07-26 22:46:03.025034] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.694 passed 00:16:10.694 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 22:46:03.110680] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:10.694 [2024-07-26 22:46:03.113705] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.694 passed 00:16:10.955 Test: admin_identify_ns ...[2024-07-26 22:46:03.200633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:10.955 [2024-07-26 22:46:03.260093] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:10.955 [2024-07-26 22:46:03.268091] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:10.955 [2024-07-26 22:46:03.289206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.955 passed 00:16:10.955 Test: admin_get_features_mandatory_features ...[2024-07-26 22:46:03.372855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:10.955 [2024-07-26 22:46:03.375878] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.955 passed 00:16:11.213 Test: admin_get_features_optional_features ...[2024-07-26 22:46:03.460434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.213 [2024-07-26 22:46:03.463454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.213 passed 00:16:11.213 Test: admin_set_features_number_of_queues ...[2024-07-26 22:46:03.545627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.213 [2024-07-26 22:46:03.650203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.213 passed 00:16:11.471 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 22:46:03.733874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.471 [2024-07-26 22:46:03.736896] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.471 passed 00:16:11.471 Test: admin_get_log_page_with_lpo ...[2024-07-26 22:46:03.820143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.471 [2024-07-26 22:46:03.890074] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:11.471 [2024-07-26 22:46:03.903156] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.471 passed 00:16:11.729 Test: fabric_property_get ...[2024-07-26 22:46:03.987015] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.729 [2024-07-26 22:46:03.988329] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:11.729 [2024-07-26 22:46:03.990066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.729 passed 00:16:11.729 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 22:46:04.074662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.729 [2024-07-26 22:46:04.075946] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:11.729 [2024-07-26 22:46:04.077681] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.729 passed 00:16:11.729 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 22:46:04.160845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.987 [2024-07-26 22:46:04.244083] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:11.987 [2024-07-26 22:46:04.260071] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:11.987 [2024-07-26 22:46:04.265175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.987 passed 00:16:11.987 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 22:46:04.348318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.987 [2024-07-26 22:46:04.349619] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:11.987 [2024-07-26 22:46:04.351342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.987 passed 00:16:11.987 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 22:46:04.434550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.245 [2024-07-26 22:46:04.513067] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:12.245 [2024-07-26 22:46:04.537099] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:12.245 [2024-07-26 22:46:04.542175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.245 passed 00:16:12.245 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 22:46:04.624753] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.245 [2024-07-26 22:46:04.626042] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:12.245 [2024-07-26 22:46:04.626097] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:12.245 [2024-07-26 22:46:04.627777] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.245 passed 00:16:12.245 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 22:46:04.709645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.503 [2024-07-26 22:46:04.801070] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:12.503 [2024-07-26 22:46:04.809082] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:12.503 [2024-07-26 22:46:04.817069] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:12.503 [2024-07-26 22:46:04.825066] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:12.503 [2024-07-26 22:46:04.854184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.503 passed 00:16:12.503 Test: admin_create_io_sq_verify_pc ...[2024-07-26 22:46:04.937781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.503 [2024-07-26 22:46:04.954084] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:12.503 [2024-07-26 22:46:04.972140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.503 passed 00:16:12.760 Test: admin_create_io_qp_max_qps ...[2024-07-26 22:46:05.056711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.693 [2024-07-26 22:46:06.151091] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:14.258 [2024-07-26 22:46:06.541394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.258 passed 00:16:14.258 Test: admin_create_io_sq_shared_cq ...[2024-07-26 22:46:06.622566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.258 [2024-07-26 22:46:06.754085] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:14.517 [2024-07-26 22:46:06.791158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.517 passed 00:16:14.517 00:16:14.517 Run Summary: Type Total Ran Passed Failed Inactive 00:16:14.517 suites 1 1 n/a 0 0 00:16:14.517 tests 18 18 18 0 0 00:16:14.517 asserts 360 360 360 0 n/a 00:16:14.517 00:16:14.517 Elapsed time = 1.562 seconds 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3509407 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3509407 ']' 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3509407 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3509407 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3509407' 00:16:14.517 killing process with pid 3509407 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 3509407 00:16:14.517 22:46:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3509407 00:16:14.776 22:46:07 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:14.776 00:16:14.776 real 0m5.745s 00:16:14.776 user 0m16.145s 00:16:14.776 sys 0m0.546s 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:14.776 ************************************ 00:16:14.776 END TEST nvmf_vfio_user_nvme_compliance 00:16:14.776 ************************************ 00:16:14.776 22:46:07 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:14.776 22:46:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:14.776 22:46:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:14.776 22:46:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.776 ************************************ 00:16:14.776 START TEST nvmf_vfio_user_fuzz 00:16:14.776 ************************************ 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:14.776 * Looking for test storage... 00:16:14.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
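Side note on the compliance failures logged above: the rejected Create I/O SQ/CQ parameters (queue size 1 and 257, cqid 0 and 128, non-PC queues, oversized IV) all follow from NVMe's queue-creation rules: a queue needs at least 2 entries, qid 0 is reserved for the admin queue, and this controller evidently caps entries at 256 and queue ids below 128. A minimal shell sketch of equivalent checks; the function name and exact limits are inferred from the logged errors, not taken from vfio_user.c:

    # hypothetical validation mirroring the compliance errors above (not SPDK source)
    validate_create_io_sq() {
        local sqid=$1 qsize=$2 cqid=$3 pc=$4
        (( sqid == 0 || sqid > 127 ))  && { echo "invalid sqid:$sqid"; return 1; }        # qid 0 is the admin queue
        (( qsize < 2 || qsize > 256 )) && { echo "invalid I/O queue size $qsize"; return 1; }
        (( cqid == 0 || cqid > 127 ))  && { echo "invalid cqid:$cqid"; return 1; }        # CQ id must be a valid I/O qid
        (( pc == 1 ))                  || { echo "non-PC SQ not supported"; return 1; }   # physically contiguous only
    }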
00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3510160 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3510160' 00:16:14.776 Process pid: 3510160 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3510160 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3510160 ']' 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
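The bring-up that follows is a fixed RPC sequence against the just-started target: a VFIOUSER transport, a 64 MiB/512 B malloc bdev, a subsystem with one namespace, and a vfio-user listener, after which nvme_fuzz hammers it for 30 seconds with a fixed seed. Condensed into plain commands (rpc_cmd is the autotest wrapper around scripts/rpc.py; this is a sketch, not a drop-in script):

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc.py bdev_malloc_create 64 512 -b malloc0
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'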
00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:14.776 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:15.343 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:15.343 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:16:15.343 22:46:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:16.308 malloc0 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:16.308 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.309 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:16.309 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.309 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:16.309 22:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:48.375 Fuzzing completed. 
Shutting down the fuzz application 00:16:48.375 00:16:48.375 Dumping successful admin opcodes: 00:16:48.375 8, 9, 10, 24, 00:16:48.375 Dumping successful io opcodes: 00:16:48.375 0, 00:16:48.375 NS: 0x200003a1ef00 I/O qp, Total commands completed: 607516, total successful commands: 2349, random_seed: 720720192 00:16:48.375 NS: 0x200003a1ef00 admin qp, Total commands completed: 87001, total successful commands: 694, random_seed: 501424768 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3510160 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3510160 ']' 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3510160 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3510160 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3510160' 00:16:48.375 killing process with pid 3510160 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3510160 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3510160 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:48.375 00:16:48.375 real 0m32.167s 00:16:48.375 user 0m31.391s 00:16:48.375 sys 0m29.828s 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:48.375 22:46:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:48.375 ************************************ 00:16:48.375 END TEST nvmf_vfio_user_fuzz 00:16:48.375 ************************************ 00:16:48.375 22:46:39 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:48.375 22:46:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:48.375 22:46:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:48.375 22:46:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:48.375 ************************************ 00:16:48.375 START TEST nvmf_host_management 00:16:48.375 
************************************ 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:48.375 * Looking for test storage... 00:16:48.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.375 22:46:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
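The prepare_net_devs pass that follows builds a table of supported Intel e810/x722 and Mellanox device IDs, then resolves each matching PCI function to its kernel netdev through sysfs, keeping only interfaces that report as up. Roughly, as a sketch of the loop visible below (whether the "up" test reads operstate is an assumption; the log only shows the already-expanded comparison):

    # sketch of the PCI -> netdev mapping performed below
    for pci in "${pci_devs[@]}"; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            dev=${path##*/}                                    # e.g. cvl_0_0
            [[ $(cat /sys/class/net/$dev/operstate) == up ]] && net_devs+=("$dev")
        done
    done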
00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:48.376 22:46:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:48.941 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.941 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.941 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.941 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.941 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.941 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.941 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.941 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.942 22:46:41 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:48.942 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:48.942 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:48.942 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:48.942 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.942 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.200 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.200 22:46:41 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.200 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:49.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:16:49.200 00:16:49.200 --- 10.0.0.2 ping statistics --- 00:16:49.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.200 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:16:49.200 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:16:49.200 00:16:49.200 --- 10.0.0.1 ping statistics --- 00:16:49.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.201 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3515571 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3515571 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3515571 ']' 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:49.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:49.201 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.201 [2024-07-26 22:46:41.551819] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:49.201 [2024-07-26 22:46:41.551901] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.201 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.201 [2024-07-26 22:46:41.620912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:49.459 [2024-07-26 22:46:41.713430] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.459 [2024-07-26 22:46:41.713484] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.459 [2024-07-26 22:46:41.713500] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.459 [2024-07-26 22:46:41.713514] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.459 [2024-07-26 22:46:41.713525] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.459 [2024-07-26 22:46:41.713626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.459 [2024-07-26 22:46:41.713714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.459 [2024-07-26 22:46:41.713996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:49.459 [2024-07-26 22:46:41.714000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.459 [2024-07-26 22:46:41.868747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.459 Malloc0 00:16:49.459 [2024-07-26 22:46:41.928713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3515731 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3515731 /var/tmp/bdevperf.sock 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3515731 ']' 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
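For orientation, the nvmftestinit plumbing earlier in this test isolates target and initiator on a single host by moving one port of the NIC pair (presumably cabled back-to-back) into a private network namespace, so traffic between 10.0.0.1 and 10.0.0.2 crosses the physical ports rather than loopback. A condensed replay of the commands logged above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port leaves the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1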
00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:49.459 { 00:16:49.459 "params": { 00:16:49.459 "name": "Nvme$subsystem", 00:16:49.459 "trtype": "$TEST_TRANSPORT", 00:16:49.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.459 "adrfam": "ipv4", 00:16:49.459 "trsvcid": "$NVMF_PORT", 00:16:49.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.459 "hdgst": ${hdgst:-false}, 00:16:49.459 "ddgst": ${ddgst:-false} 00:16:49.459 }, 00:16:49.459 "method": "bdev_nvme_attach_controller" 00:16:49.459 } 00:16:49.459 EOF 00:16:49.459 )") 00:16:49.459 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:49.717 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:49.717 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:49.717 22:46:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:49.717 "params": { 00:16:49.717 "name": "Nvme0", 00:16:49.717 "trtype": "tcp", 00:16:49.717 "traddr": "10.0.0.2", 00:16:49.717 "adrfam": "ipv4", 00:16:49.717 "trsvcid": "4420", 00:16:49.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:49.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:49.717 "hdgst": false, 00:16:49.717 "ddgst": false 00:16:49.717 }, 00:16:49.717 "method": "bdev_nvme_attach_controller" 00:16:49.717 }' 00:16:49.717 [2024-07-26 22:46:41.998474] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:49.717 [2024-07-26 22:46:41.998549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515731 ] 00:16:49.717 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.717 [2024-07-26 22:46:42.059599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.717 [2024-07-26 22:46:42.145883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.974 Running I/O for 10 seconds... 
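The "Running I/O for 10 seconds..." line above comes from bdevperf, which was handed its target description as generated JSON over process substitution (that is the /dev/fd/63 in the launch): gen_nvmf_target_json 0 expands the heredoc template shown above into a bdev_nvme_attach_controller entry for Nvme0 aimed at the TCP listener on 10.0.0.2:4420. An equivalent invocation in sketch form (per bdevperf usage: -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds):

    # same as the launch above, with the process substitution written out
    bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
             -q 64 -o 65536 -w verify -t 10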
00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=64 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 64 -ge 100 ']' 00:16:49.974 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:50.232 [2024-07-26 22:46:42.707253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45980 is same with the state(5) to be set 00:16:50.232 [2024-07-26 22:46:42.707317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45980 is same with the state(5) to be set 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.232 [2024-07-26 22:46:42.712148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.232 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:50.232 [2024-07-26 22:46:42.712189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.232 [2024-07-26 22:46:42.712209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.232 [2024-07-26 22:46:42.712224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.232 [2024-07-26 22:46:42.712237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.232 [2024-07-26 22:46:42.712251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.232 [2024-07-26 22:46:42.712265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.232 [2024-07-26 22:46:42.712278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.232 [2024-07-26 22:46:42.712291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c85f00 is same with the state(5) to be set 00:16:50.232 [2024-07-26 22:46:42.712344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.232 [2024-07-26 22:46:42.712373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.232 [2024-07-26 22:46:42.712398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.232 [2024-07-26 22:46:42.712413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:50.232 [2024-07-26 22:46:42.712438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:50.232 [2024-07-26 22:46:42.712452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE command / ABORTED - SQ DELETION completion pair repeats for cid:3 through cid:63 (lba:65920 through lba:73600, len:128 each) while qid:1 is torn down ...]
00:16:50.234 [2024-07-26 22:46:42.714313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:50.234 [2024-07-26 22:46:42.714326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:50.234 [2024-07-26 22:46:42.714445] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c80330 was disconnected and freed. reset controller.
00:16:50.234 [2024-07-26 22:46:42.715565] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:50.234 task offset: 65536 on job bdev=Nvme0n1 fails
00:16:50.234
00:16:50.234 Latency(us)
00:16:50.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:50.234 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:50.234 Job: Nvme0n1 ended in about 0.39 seconds with error
00:16:50.234 Verification LBA range: start 0x0 length 0x400
00:16:50.234 Nvme0n1 : 0.39 1305.86 81.62 163.23 0.00 42335.46 2900.57 38641.97
00:16:50.234 ===================================================================================================================
00:16:50.234 Total : 1305.86 81.62 163.23 0.00 42335.46 2900.57 38641.97
00:16:50.234 [2024-07-26 22:46:42.717450] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:50.234 [2024-07-26 22:46:42.717478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c85f00 (9): Bad file descriptor
00:16:50.234 22:46:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:50.234 22:46:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-26 22:46:42.725065] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
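The sixty-plus ABORTED - SQ DELETION notices above are one completion per WRITE (cid 2 through 63) still outstanding on qid:1 when the submission queue was deleted during the controller reset; bdevperf then fails the whole job at task offset 65536. A minimal sketch for sanity-checking that the storm is uniform, assuming the console output has been saved to bdevperf_console.log (a hypothetical path):

# Count the abort completions, then show the lowest and highest aborted LBA.
grep -o 'ABORTED - SQ DELETION' bdevperf_console.log | wc -l
grep -o 'lba:[0-9]*' bdevperf_console.log | sort -t: -k2 -n | sed -n '1p;$p'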
00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3515731 00:16:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3515731) - No such process 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.604 { 00:16:51.604 "params": { 00:16:51.604 "name": "Nvme$subsystem", 00:16:51.604 "trtype": "$TEST_TRANSPORT", 00:16:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.604 "adrfam": "ipv4", 00:16:51.604 "trsvcid": "$NVMF_PORT", 00:16:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.604 "hdgst": ${hdgst:-false}, 00:16:51.604 "ddgst": ${ddgst:-false} 00:16:51.604 }, 00:16:51.604 "method": "bdev_nvme_attach_controller" 00:16:51.604 } 00:16:51.604 EOF 00:16:51.604 )") 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:51.604 22:46:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:51.604 "params": { 00:16:51.604 "name": "Nvme0", 00:16:51.604 "trtype": "tcp", 00:16:51.604 "traddr": "10.0.0.2", 00:16:51.604 "adrfam": "ipv4", 00:16:51.604 "trsvcid": "4420", 00:16:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:51.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:51.604 "hdgst": false, 00:16:51.604 "ddgst": false 00:16:51.604 }, 00:16:51.604 "method": "bdev_nvme_attach_controller" 00:16:51.604 }' 00:16:51.604 [2024-07-26 22:46:43.763422] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:51.604 [2024-07-26 22:46:43.763533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515891 ] 00:16:51.604 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.604 [2024-07-26 22:46:43.823228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.604 [2024-07-26 22:46:43.912698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.862 Running I/O for 1 seconds... 
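The JSON that gen_nvmf_target_json streams to bdevperf over /dev/fd/62 is hard to read interleaved with timestamps. Below is a standalone sketch of the same run against a regular file: the bdev_nvme_attach_controller entry is copied from the printf output above, while the surrounding "subsystems" wrapper is the generic SPDK JSON-config shape, an assumption rather than a byte-for-byte copy of what the helper emits.

# Reconstruct the bdevperf config and rerun it by hand (from the spdk repo root).
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1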
00:16:52.795 00:16:52.795 Latency(us) 00:16:52.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.795 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.795 Verification LBA range: start 0x0 length 0x400 00:16:52.795 Nvme0n1 : 1.02 1385.78 86.61 0.00 0.00 45496.08 13010.11 41748.86 00:16:52.795 =================================================================================================================== 00:16:52.795 Total : 1385.78 86.61 0.00 0.00 45496.08 13010.11 41748.86 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.053 rmmod nvme_tcp 00:16:53.053 rmmod nvme_fabrics 00:16:53.053 rmmod nvme_keyring 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.053 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3515571 ']' 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3515571 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3515571 ']' 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3515571 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3515571 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3515571' 00:16:53.311 killing process with pid 3515571 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3515571 00:16:53.311 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3515571 00:16:53.311 [2024-07-26 22:46:45.808088] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:53.569 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:53.569 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:53.569 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:53.569 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.569 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.569 22:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.569 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.569 22:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.470 22:46:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:55.470 22:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:55.470 00:16:55.470 real 0m8.487s 00:16:55.470 user 0m19.205s 00:16:55.470 sys 0m2.560s 00:16:55.471 22:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:55.471 22:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.471 ************************************ 00:16:55.471 END TEST nvmf_host_management 00:16:55.471 ************************************ 00:16:55.471 22:46:47 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:55.471 22:46:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:55.471 22:46:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:55.471 22:46:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:55.471 ************************************ 00:16:55.471 START TEST nvmf_lvol 00:16:55.471 ************************************ 00:16:55.471 22:46:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:55.729 * Looking for test storage... 
00:16:55.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.729 22:46:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.730 22:46:47 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:55.730 22:46:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:55.730 22:46:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.632 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:57.633 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:57.633 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:57.633 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:57.633 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:57.633 
22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:57.633 22:46:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:57.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:16:57.633 00:16:57.633 --- 10.0.0.2 ping statistics --- 00:16:57.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.633 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:16:57.633 00:16:57.633 --- 10.0.0.1 ping statistics --- 00:16:57.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.633 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:57.633 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:57.634 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3518078 00:16:57.634 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:57.634 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3518078 00:16:57.634 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3518078 ']' 00:16:57.634 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.634 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:57.634 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.634 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:57.634 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:57.634 [2024-07-26 22:46:50.113823] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:57.634 [2024-07-26 22:46:50.113917] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.892 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.892 [2024-07-26 22:46:50.182471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:57.892 [2024-07-26 22:46:50.271457] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.892 [2024-07-26 22:46:50.271510] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
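The target at 10.0.0.2 only exists inside the cvl_0_0_ns_spdk namespace set up above; the initiator side stays in the root namespace on cvl_0_1, which is why both ping directions are checked. A condensed restatement of that plumbing, with the interface names and addresses as used in this run (needs root):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start clean
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator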
00:16:57.892 [2024-07-26 22:46:50.271539] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.892 [2024-07-26 22:46:50.271550] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.892 [2024-07-26 22:46:50.271560] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.892 [2024-07-26 22:46:50.271689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.892 [2024-07-26 22:46:50.271961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.892 [2024-07-26 22:46:50.271965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.892 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:57.892 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:57.892 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.892 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.893 22:46:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:58.150 22:46:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.150 22:46:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:58.150 [2024-07-26 22:46:50.628674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.150 22:46:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.717 22:46:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:58.717 22:46:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.717 22:46:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:58.717 22:46:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:58.975 22:46:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:59.271 22:46:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5e151d0c-fbc5-4f65-8728-233027361d74 00:16:59.272 22:46:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5e151d0c-fbc5-4f65-8728-233027361d74 lvol 20 00:16:59.533 22:46:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1716ff08-fa15-424b-8687-64fbb015ee6e 00:16:59.533 22:46:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:59.790 22:46:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1716ff08-fa15-424b-8687-64fbb015ee6e 00:17:00.048 22:46:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:17:00.305 [2024-07-26 22:46:52.665203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.305 22:46:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:00.562 22:46:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3518389 00:17:00.562 22:46:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:00.562 22:46:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:00.562 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.493 22:46:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1716ff08-fa15-424b-8687-64fbb015ee6e MY_SNAPSHOT 00:17:01.751 22:46:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=28abbe85-30a9-4f78-a65b-0d7f4d0bc5ce 00:17:01.751 22:46:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1716ff08-fa15-424b-8687-64fbb015ee6e 30 00:17:02.317 22:46:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 28abbe85-30a9-4f78-a65b-0d7f4d0bc5ce MY_CLONE 00:17:02.317 22:46:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=42cb8354-1384-4f90-8887-714bac7a81d1 00:17:02.317 22:46:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 42cb8354-1384-4f90-8887-714bac7a81d1 00:17:02.883 22:46:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3518389 00:17:10.990 Initializing NVMe Controllers 00:17:10.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:10.990 Controller IO queue size 128, less than required. 00:17:10.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:10.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:10.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:10.990 Initialization complete. Launching workers. 
00:17:10.990 ======================================================== 00:17:10.990 Latency(us) 00:17:10.990 Device Information : IOPS MiB/s Average min max 00:17:10.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10325.90 40.34 12406.58 1907.46 62084.17 00:17:10.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10321.70 40.32 12409.64 2301.69 67726.64 00:17:10.990 ======================================================== 00:17:10.990 Total : 20647.60 80.65 12408.11 1907.46 67726.64 00:17:10.990 00:17:10.990 22:47:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:11.247 22:47:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1716ff08-fa15-424b-8687-64fbb015ee6e 00:17:11.503 22:47:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5e151d0c-fbc5-4f65-8728-233027361d74 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.760 rmmod nvme_tcp 00:17:11.760 rmmod nvme_fabrics 00:17:11.760 rmmod nvme_keyring 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3518078 ']' 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3518078 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3518078 ']' 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3518078 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:11.760 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3518078 00:17:11.761 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:11.761 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:11.761 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3518078' 00:17:11.761 killing process with pid 3518078 00:17:11.761 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3518078 00:17:11.761 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3518078 00:17:12.325 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.325 
22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.325 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.325 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.325 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.325 22:47:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.325 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.325 22:47:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.226 22:47:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.226 00:17:14.226 real 0m18.654s 00:17:14.226 user 1m3.666s 00:17:14.226 sys 0m5.615s 00:17:14.226 22:47:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:14.226 22:47:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:14.226 ************************************ 00:17:14.226 END TEST nvmf_lvol 00:17:14.226 ************************************ 00:17:14.226 22:47:06 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:14.226 22:47:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:14.226 22:47:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:14.226 22:47:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.226 ************************************ 00:17:14.226 START TEST nvmf_lvs_grow 00:17:14.226 ************************************ 00:17:14.226 22:47:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:14.226 * Looking for test storage... 
00:17:14.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.226 22:47:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.226 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:14.226 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.226 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.226 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.227 22:47:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:16.130 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:16.131 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:16.131 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:16.131 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:16.131 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.131 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.389 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.389 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.389 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.389 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.389 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.389 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.389 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:17:16.389 00:17:16.389 --- 10.0.0.2 ping statistics --- 00:17:16.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.390 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:17:16.390 00:17:16.390 --- 10.0.0.1 ping statistics --- 00:17:16.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.390 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3521646 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3521646 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3521646 ']' 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:16.390 22:47:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:16.390 [2024-07-26 22:47:08.825192] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:16.390 [2024-07-26 22:47:08.825266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.390 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.648 [2024-07-26 22:47:08.893168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.648 [2024-07-26 22:47:08.981842] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.648 [2024-07-26 22:47:08.981907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
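At this point the harness has moved one of the two E810 ports (cvl_0_0) into a network namespace and verified 10.0.0.1 and 10.0.0.2 can reach each other in both directions. A condensed sketch of that plumbing, using the same commands traced above; the cvl_0_0/cvl_0_1 interface names come from this host's NICs and will differ elsewhere:

  # Hedged sketch of the netns setup performed by nvmf_tcp_init.
  ip netns add cvl_0_0_ns_spdk                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator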
00:17:16.648 [2024-07-26 22:47:08.981923] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.648 [2024-07-26 22:47:08.981937] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.648 [2024-07-26 22:47:08.981949] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.648 [2024-07-26 22:47:08.981978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.648 22:47:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:16.648 22:47:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:16.648 22:47:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.648 22:47:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.648 22:47:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:16.648 22:47:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.648 22:47:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:16.906 [2024-07-26 22:47:09.396357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.163 22:47:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:17.164 ************************************ 00:17:17.164 START TEST lvs_grow_clean 00:17:17.164 ************************************ 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:17.164 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:17.422 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:17.422 22:47:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:17.679 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:17.679 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:17.679 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:17.937 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:17.937 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:17.937 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 lvol 150 00:17:18.195 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=16c92dc8-6505-4dfe-b4fd-90f920534e8d 00:17:18.195 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.195 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:18.453 [2024-07-26 22:47:10.835430] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:18.453 [2024-07-26 22:47:10.835529] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:18.453 true 00:17:18.453 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:18.453 22:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:18.710 22:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:18.710 22:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:18.968 22:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16c92dc8-6505-4dfe-b4fd-90f920534e8d 00:17:19.225 22:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:19.483 [2024-07-26 22:47:11.834552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.483 22:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3522080 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3522080 /var/tmp/bdevperf.sock 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3522080 ']' 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:19.756 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:19.756 [2024-07-26 22:47:12.138158] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
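The trace below launches bdevperf on its own RPC socket, attaches it to the exported subsystem, and drives the 10-second randwrite run whose per-second results follow. The same three steps, condensed from the commands in this trace (again with $SPDK_DIR as a placeholder checkout path):

  # Hedged sketch of the bdevperf flow exercised here.
  "$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock \
      perform_tests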
00:17:19.756 [2024-07-26 22:47:12.138249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522080 ] 00:17:19.756 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.756 [2024-07-26 22:47:12.199526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.032 [2024-07-26 22:47:12.291732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.032 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:20.032 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:20.032 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:20.289 Nvme0n1 00:17:20.289 22:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:20.547 [ 00:17:20.547 { 00:17:20.547 "name": "Nvme0n1", 00:17:20.547 "aliases": [ 00:17:20.547 "16c92dc8-6505-4dfe-b4fd-90f920534e8d" 00:17:20.547 ], 00:17:20.547 "product_name": "NVMe disk", 00:17:20.547 "block_size": 4096, 00:17:20.547 "num_blocks": 38912, 00:17:20.547 "uuid": "16c92dc8-6505-4dfe-b4fd-90f920534e8d", 00:17:20.547 "assigned_rate_limits": { 00:17:20.547 "rw_ios_per_sec": 0, 00:17:20.547 "rw_mbytes_per_sec": 0, 00:17:20.547 "r_mbytes_per_sec": 0, 00:17:20.547 "w_mbytes_per_sec": 0 00:17:20.547 }, 00:17:20.547 "claimed": false, 00:17:20.547 "zoned": false, 00:17:20.547 "supported_io_types": { 00:17:20.547 "read": true, 00:17:20.547 "write": true, 00:17:20.547 "unmap": true, 00:17:20.547 "write_zeroes": true, 00:17:20.547 "flush": true, 00:17:20.547 "reset": true, 00:17:20.547 "compare": true, 00:17:20.547 "compare_and_write": true, 00:17:20.547 "abort": true, 00:17:20.547 "nvme_admin": true, 00:17:20.547 "nvme_io": true 00:17:20.547 }, 00:17:20.547 "memory_domains": [ 00:17:20.547 { 00:17:20.547 "dma_device_id": "system", 00:17:20.547 "dma_device_type": 1 00:17:20.547 } 00:17:20.547 ], 00:17:20.547 "driver_specific": { 00:17:20.547 "nvme": [ 00:17:20.547 { 00:17:20.547 "trid": { 00:17:20.547 "trtype": "TCP", 00:17:20.547 "adrfam": "IPv4", 00:17:20.547 "traddr": "10.0.0.2", 00:17:20.547 "trsvcid": "4420", 00:17:20.547 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:20.547 }, 00:17:20.547 "ctrlr_data": { 00:17:20.547 "cntlid": 1, 00:17:20.547 "vendor_id": "0x8086", 00:17:20.547 "model_number": "SPDK bdev Controller", 00:17:20.547 "serial_number": "SPDK0", 00:17:20.547 "firmware_revision": "24.05.1", 00:17:20.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:20.547 "oacs": { 00:17:20.547 "security": 0, 00:17:20.547 "format": 0, 00:17:20.547 "firmware": 0, 00:17:20.547 "ns_manage": 0 00:17:20.547 }, 00:17:20.547 "multi_ctrlr": true, 00:17:20.547 "ana_reporting": false 00:17:20.547 }, 00:17:20.547 "vs": { 00:17:20.547 "nvme_version": "1.3" 00:17:20.547 }, 00:17:20.547 "ns_data": { 00:17:20.547 "id": 1, 00:17:20.547 "can_share": true 00:17:20.547 } 00:17:20.547 } 00:17:20.547 ], 00:17:20.547 "mp_policy": "active_passive" 00:17:20.547 } 00:17:20.547 } 00:17:20.547 ] 00:17:20.547 22:47:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3522215 00:17:20.547 22:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:20.547 22:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:20.804 Running I/O for 10 seconds... 00:17:21.738 Latency(us) 00:17:21.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.738 Nvme0n1 : 1.00 13403.00 52.36 0.00 0.00 0.00 0.00 0.00 00:17:21.738 =================================================================================================================== 00:17:21.738 Total : 13403.00 52.36 0.00 0.00 0.00 0.00 0.00 00:17:21.738 00:17:22.671 22:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:22.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.671 Nvme0n1 : 2.00 13577.50 53.04 0.00 0.00 0.00 0.00 0.00 00:17:22.671 =================================================================================================================== 00:17:22.671 Total : 13577.50 53.04 0.00 0.00 0.00 0.00 0.00 00:17:22.671 00:17:22.929 true 00:17:22.929 22:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:22.929 22:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:23.187 22:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:23.187 22:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:23.187 22:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3522215 00:17:23.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.753 Nvme0n1 : 3.00 13670.33 53.40 0.00 0.00 0.00 0.00 0.00 00:17:23.753 =================================================================================================================== 00:17:23.753 Total : 13670.33 53.40 0.00 0.00 0.00 0.00 0.00 00:17:23.753 00:17:24.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.687 Nvme0n1 : 4.00 13698.75 53.51 0.00 0.00 0.00 0.00 0.00 00:17:24.687 =================================================================================================================== 00:17:24.687 Total : 13698.75 53.51 0.00 0.00 0.00 0.00 0.00 00:17:24.687 00:17:26.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.061 Nvme0n1 : 5.00 13778.20 53.82 0.00 0.00 0.00 0.00 0.00 00:17:26.061 =================================================================================================================== 00:17:26.061 Total : 13778.20 53.82 0.00 0.00 0.00 0.00 0.00 00:17:26.061 00:17:26.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.994 Nvme0n1 : 6.00 13791.17 53.87 0.00 0.00 0.00 0.00 0.00 00:17:26.994 
=================================================================================================================== 00:17:26.994 Total : 13791.17 53.87 0.00 0.00 0.00 0.00 0.00 00:17:26.994 00:17:27.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.928 Nvme0n1 : 7.00 13802.71 53.92 0.00 0.00 0.00 0.00 0.00 00:17:27.928 =================================================================================================================== 00:17:27.928 Total : 13802.71 53.92 0.00 0.00 0.00 0.00 0.00 00:17:27.928 00:17:28.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.862 Nvme0n1 : 8.00 13819.38 53.98 0.00 0.00 0.00 0.00 0.00 00:17:28.862 =================================================================================================================== 00:17:28.862 Total : 13819.38 53.98 0.00 0.00 0.00 0.00 0.00 00:17:28.862 00:17:29.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.797 Nvme0n1 : 9.00 13833.22 54.04 0.00 0.00 0.00 0.00 0.00 00:17:29.797 =================================================================================================================== 00:17:29.797 Total : 13833.22 54.04 0.00 0.00 0.00 0.00 0.00 00:17:29.797 00:17:30.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.732 Nvme0n1 : 10.00 13833.10 54.04 0.00 0.00 0.00 0.00 0.00 00:17:30.732 =================================================================================================================== 00:17:30.732 Total : 13833.10 54.04 0.00 0.00 0.00 0.00 0.00 00:17:30.732 00:17:30.732 00:17:30.732 Latency(us) 00:17:30.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.732 Nvme0n1 : 10.01 13833.59 54.04 0.00 0.00 9244.79 2645.71 12379.02 00:17:30.732 =================================================================================================================== 00:17:30.732 Total : 13833.59 54.04 0.00 0.00 9244.79 2645.71 12379.02 00:17:30.732 0 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3522080 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3522080 ']' 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3522080 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3522080 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3522080' 00:17:30.732 killing process with pid 3522080 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3522080 00:17:30.732 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.732 00:17:30.732 Latency(us) 00:17:30.732 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:30.732 =================================================================================================================== 00:17:30.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.732 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3522080 00:17:30.990 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:31.246 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:31.504 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:31.505 22:47:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:31.762 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:31.762 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:31.762 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:32.021 [2024-07-26 22:47:24.510858] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:32.279 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:32.279 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:32.280 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:32.537 request: 00:17:32.537 { 00:17:32.537 "uuid": "ff98cac4-918c-40d9-8e7c-081a1941b5b3", 00:17:32.537 "method": "bdev_lvol_get_lvstores", 00:17:32.537 "req_id": 1 00:17:32.537 } 00:17:32.537 Got JSON-RPC error response 00:17:32.537 response: 00:17:32.537 { 00:17:32.537 "code": -19, 00:17:32.537 "message": "No such device" 00:17:32.537 } 00:17:32.537 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:32.537 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:32.537 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:32.537 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:32.537 22:47:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:32.796 aio_bdev 00:17:32.796 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 16c92dc8-6505-4dfe-b4fd-90f920534e8d 00:17:32.796 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=16c92dc8-6505-4dfe-b4fd-90f920534e8d 00:17:32.796 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:32.796 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:32.796 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:32.796 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:32.796 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:33.054 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 16c92dc8-6505-4dfe-b4fd-90f920534e8d -t 2000 00:17:33.312 [ 00:17:33.312 { 00:17:33.312 "name": "16c92dc8-6505-4dfe-b4fd-90f920534e8d", 00:17:33.312 "aliases": [ 00:17:33.312 "lvs/lvol" 00:17:33.312 ], 00:17:33.312 "product_name": "Logical Volume", 00:17:33.312 "block_size": 4096, 00:17:33.312 "num_blocks": 38912, 00:17:33.312 "uuid": "16c92dc8-6505-4dfe-b4fd-90f920534e8d", 00:17:33.312 "assigned_rate_limits": { 00:17:33.312 "rw_ios_per_sec": 0, 00:17:33.312 "rw_mbytes_per_sec": 0, 00:17:33.312 "r_mbytes_per_sec": 0, 00:17:33.312 "w_mbytes_per_sec": 0 00:17:33.312 }, 00:17:33.312 "claimed": false, 00:17:33.312 "zoned": false, 00:17:33.312 "supported_io_types": { 00:17:33.312 "read": true, 00:17:33.312 "write": true, 00:17:33.312 "unmap": true, 00:17:33.312 "write_zeroes": true, 00:17:33.312 "flush": false, 00:17:33.312 "reset": true, 00:17:33.312 "compare": false, 00:17:33.312 "compare_and_write": false, 00:17:33.312 "abort": false, 00:17:33.312 "nvme_admin": false, 00:17:33.312 "nvme_io": false 00:17:33.312 }, 00:17:33.312 "driver_specific": { 00:17:33.312 "lvol": { 00:17:33.312 "lvol_store_uuid": "ff98cac4-918c-40d9-8e7c-081a1941b5b3", 00:17:33.312 "base_bdev": "aio_bdev", 
00:17:33.312 "thin_provision": false, 00:17:33.312 "num_allocated_clusters": 38, 00:17:33.312 "snapshot": false, 00:17:33.312 "clone": false, 00:17:33.312 "esnap_clone": false 00:17:33.312 } 00:17:33.312 } 00:17:33.312 } 00:17:33.312 ] 00:17:33.312 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:33.312 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:33.312 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:33.569 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:33.569 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:33.569 22:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:33.828 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:33.828 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 16c92dc8-6505-4dfe-b4fd-90f920534e8d 00:17:34.085 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff98cac4-918c-40d9-8e7c-081a1941b5b3 00:17:34.381 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:34.639 00:17:34.639 real 0m17.510s 00:17:34.639 user 0m16.661s 00:17:34.639 sys 0m2.109s 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:34.639 ************************************ 00:17:34.639 END TEST lvs_grow_clean 00:17:34.639 ************************************ 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:34.639 ************************************ 00:17:34.639 START TEST lvs_grow_dirty 00:17:34.639 ************************************ 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:34.639 22:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:34.639 22:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:34.897 22:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:34.897 22:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:35.155 22:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:35.155 22:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:35.155 22:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:35.412 22:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:35.412 22:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:35.412 22:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c260786f-aa66-43d1-b6d3-76ea477be6c6 lvol 150 00:17:35.670 22:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ca75483d-08a1-46ee-822f-f0a5241516f5 00:17:35.670 22:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:35.670 22:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:35.928 [2024-07-26 22:47:28.248241] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:35.928 [2024-07-26 22:47:28.248347] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:35.928 true 00:17:35.928 22:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:35.928 22:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:36.185 22:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:36.185 22:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:36.443 22:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ca75483d-08a1-46ee-822f-f0a5241516f5 00:17:36.700 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:36.958 [2024-07-26 22:47:29.371675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.958 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:37.215 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3524263 00:17:37.215 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:37.215 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.215 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3524263 /var/tmp/bdevperf.sock 00:17:37.215 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3524263 ']' 00:17:37.215 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.215 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.216 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.216 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.216 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 [2024-07-26 22:47:29.717276] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
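The dirty variant repeats the provisioning sequence already traced for lvs_grow_clean, and the grow itself lands just below: back an lvstore with a 200M AIO file, carve a 150M lvol, grow the file to 400M, rescan the AIO bdev, then grow the lvstore so total_data_clusters moves from 49 to 99. Condensed as a hedged sketch, with <lvs-uuid> standing in for the UUID returned by the create call:

  # Hedged sketch of the lvstore grow sequence this test verifies.
  truncate -s 200M /tmp/aio_bdev            # CI uses test/nvmf/target/aio_bdev
  "$SPDK_DIR/scripts/rpc.py" bdev_aio_create /tmp/aio_bdev aio_bdev 4096
  "$SPDK_DIR/scripts/rpc.py" bdev_lvol_create_lvstore \
      --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  "$SPDK_DIR/scripts/rpc.py" bdev_lvol_create -u <lvs-uuid> lvol 150
  truncate -s 400M /tmp/aio_bdev            # grow the backing file
  "$SPDK_DIR/scripts/rpc.py" bdev_aio_rescan aio_bdev
  "$SPDK_DIR/scripts/rpc.py" bdev_lvol_grow_lvstore -u <lvs-uuid>
  "$SPDK_DIR/scripts/rpc.py" bdev_lvol_get_lvstores -u <lvs-uuid> \
      | jq -r '.[0].total_data_clusters'    # expect 99 after the grow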
00:17:37.216 [2024-07-26 22:47:29.717369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524263 ] 00:17:37.473 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.473 [2024-07-26 22:47:29.778444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.473 [2024-07-26 22:47:29.870694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.731 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:37.731 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:37.731 22:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:37.988 Nvme0n1 00:17:37.988 22:47:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:38.246 [ 00:17:38.246 { 00:17:38.246 "name": "Nvme0n1", 00:17:38.246 "aliases": [ 00:17:38.246 "ca75483d-08a1-46ee-822f-f0a5241516f5" 00:17:38.246 ], 00:17:38.246 "product_name": "NVMe disk", 00:17:38.246 "block_size": 4096, 00:17:38.246 "num_blocks": 38912, 00:17:38.246 "uuid": "ca75483d-08a1-46ee-822f-f0a5241516f5", 00:17:38.246 "assigned_rate_limits": { 00:17:38.246 "rw_ios_per_sec": 0, 00:17:38.246 "rw_mbytes_per_sec": 0, 00:17:38.246 "r_mbytes_per_sec": 0, 00:17:38.246 "w_mbytes_per_sec": 0 00:17:38.246 }, 00:17:38.246 "claimed": false, 00:17:38.246 "zoned": false, 00:17:38.247 "supported_io_types": { 00:17:38.247 "read": true, 00:17:38.247 "write": true, 00:17:38.247 "unmap": true, 00:17:38.247 "write_zeroes": true, 00:17:38.247 "flush": true, 00:17:38.247 "reset": true, 00:17:38.247 "compare": true, 00:17:38.247 "compare_and_write": true, 00:17:38.247 "abort": true, 00:17:38.247 "nvme_admin": true, 00:17:38.247 "nvme_io": true 00:17:38.247 }, 00:17:38.247 "memory_domains": [ 00:17:38.247 { 00:17:38.247 "dma_device_id": "system", 00:17:38.247 "dma_device_type": 1 00:17:38.247 } 00:17:38.247 ], 00:17:38.247 "driver_specific": { 00:17:38.247 "nvme": [ 00:17:38.247 { 00:17:38.247 "trid": { 00:17:38.247 "trtype": "TCP", 00:17:38.247 "adrfam": "IPv4", 00:17:38.247 "traddr": "10.0.0.2", 00:17:38.247 "trsvcid": "4420", 00:17:38.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:38.247 }, 00:17:38.247 "ctrlr_data": { 00:17:38.247 "cntlid": 1, 00:17:38.247 "vendor_id": "0x8086", 00:17:38.247 "model_number": "SPDK bdev Controller", 00:17:38.247 "serial_number": "SPDK0", 00:17:38.247 "firmware_revision": "24.05.1", 00:17:38.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:38.247 "oacs": { 00:17:38.247 "security": 0, 00:17:38.247 "format": 0, 00:17:38.247 "firmware": 0, 00:17:38.247 "ns_manage": 0 00:17:38.247 }, 00:17:38.247 "multi_ctrlr": true, 00:17:38.247 "ana_reporting": false 00:17:38.247 }, 00:17:38.247 "vs": { 00:17:38.247 "nvme_version": "1.3" 00:17:38.247 }, 00:17:38.247 "ns_data": { 00:17:38.247 "id": 1, 00:17:38.247 "can_share": true 00:17:38.247 } 00:17:38.247 } 00:17:38.247 ], 00:17:38.247 "mp_policy": "active_passive" 00:17:38.247 } 00:17:38.247 } 00:17:38.247 ] 00:17:38.247 22:47:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3524371 00:17:38.247 22:47:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:38.247 22:47:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:38.247 Running I/O for 10 seconds... 00:17:39.622 Latency(us) 00:17:39.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.622 Nvme0n1 : 1.00 13363.00 52.20 0.00 0.00 0.00 0.00 0.00 00:17:39.622 =================================================================================================================== 00:17:39.622 Total : 13363.00 52.20 0.00 0.00 0.00 0.00 0.00 00:17:39.622 00:17:40.188 22:47:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:40.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.446 Nvme0n1 : 2.00 13461.50 52.58 0.00 0.00 0.00 0.00 0.00 00:17:40.446 =================================================================================================================== 00:17:40.446 Total : 13461.50 52.58 0.00 0.00 0.00 0.00 0.00 00:17:40.446 00:17:40.446 true 00:17:40.446 22:47:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:40.446 22:47:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:40.705 22:47:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:40.705 22:47:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:40.705 22:47:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3524371 00:17:41.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.271 Nvme0n1 : 3.00 13582.33 53.06 0.00 0.00 0.00 0.00 0.00 00:17:41.271 =================================================================================================================== 00:17:41.271 Total : 13582.33 53.06 0.00 0.00 0.00 0.00 0.00 00:17:41.271 00:17:42.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.643 Nvme0n1 : 4.00 13638.75 53.28 0.00 0.00 0.00 0.00 0.00 00:17:42.643 =================================================================================================================== 00:17:42.643 Total : 13638.75 53.28 0.00 0.00 0.00 0.00 0.00 00:17:42.643 00:17:43.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.575 Nvme0n1 : 5.00 13669.40 53.40 0.00 0.00 0.00 0.00 0.00 00:17:43.575 =================================================================================================================== 00:17:43.575 Total : 13669.40 53.40 0.00 0.00 0.00 0.00 0.00 00:17:43.575 00:17:44.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.509 Nvme0n1 : 6.00 13691.17 53.48 0.00 0.00 0.00 0.00 0.00 00:17:44.509 
=================================================================================================================== 00:17:44.509 Total : 13691.17 53.48 0.00 0.00 0.00 0.00 0.00 00:17:44.509 00:17:45.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.443 Nvme0n1 : 7.00 13707.86 53.55 0.00 0.00 0.00 0.00 0.00 00:17:45.443 =================================================================================================================== 00:17:45.443 Total : 13707.86 53.55 0.00 0.00 0.00 0.00 0.00 00:17:45.443 00:17:46.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.377 Nvme0n1 : 8.00 13723.38 53.61 0.00 0.00 0.00 0.00 0.00 00:17:46.378 =================================================================================================================== 00:17:46.378 Total : 13723.38 53.61 0.00 0.00 0.00 0.00 0.00 00:17:46.378 00:17:47.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.311 Nvme0n1 : 9.00 13744.33 53.69 0.00 0.00 0.00 0.00 0.00 00:17:47.311 =================================================================================================================== 00:17:47.311 Total : 13744.33 53.69 0.00 0.00 0.00 0.00 0.00 00:17:47.311 00:17:48.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.245 Nvme0n1 : 10.00 13731.50 53.64 0.00 0.00 0.00 0.00 0.00 00:17:48.245 =================================================================================================================== 00:17:48.245 Total : 13731.50 53.64 0.00 0.00 0.00 0.00 0.00 00:17:48.245 00:17:48.245 00:17:48.245 Latency(us) 00:17:48.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.245 Nvme0n1 : 10.01 13731.35 53.64 0.00 0.00 9311.93 2839.89 13689.74 00:17:48.245 =================================================================================================================== 00:17:48.245 Total : 13731.35 53.64 0.00 0.00 9311.93 2839.89 13689.74 00:17:48.245 0 00:17:48.245 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3524263 00:17:48.245 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3524263 ']' 00:17:48.245 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3524263 00:17:48.245 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:48.503 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:48.503 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3524263 00:17:48.503 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:48.503 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:48.503 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3524263' 00:17:48.503 killing process with pid 3524263 00:17:48.503 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3524263 00:17:48.503 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.503 00:17:48.503 Latency(us) 00:17:48.503 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:48.503 =================================================================================================================== 00:17:48.503 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.503 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3524263 00:17:48.503 22:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:48.761 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:49.328 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:49.328 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:49.328 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:49.328 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:49.328 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3521646 00:17:49.328 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3521646 00:17:49.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3521646 Killed "${NVMF_APP[@]}" "$@" 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3525607 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3525607 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3525607 ']' 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
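Editor's note: the grow path the benchmark just validated (total_data_clusters going 49 -> 99 while I/O was in flight) boils down to the following; a minimal sketch assuming rpc.py on PATH, with an illustrative /tmp/aio_file in place of the workspace path.

    # A 200 MiB file as an AIO bdev with 4 KiB blocks; 4 MiB clusters leave
    # 49 usable data clusters after lvstore metadata.
    truncate -s 200M /tmp/aio_file
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs
    rpc.py bdev_lvol_create -l lvs lvol 150

    # Grow the file, rescan the AIO bdev, and grow the lvstore into the new
    # space -- total_data_clusters roughly doubles to 99.
    truncate -s 400M /tmp/aio_file
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py bdev_lvol_grow_lvstore -l lvs
    rpc.py bdev_lvol_get_lvstores -l lvs | jq -r '.[0].total_data_clusters'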
00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:49.587 22:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:49.587 [2024-07-26 22:47:41.887653] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:49.587 [2024-07-26 22:47:41.887741] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.587 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.587 [2024-07-26 22:47:41.953951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.587 [2024-07-26 22:47:42.038996] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.587 [2024-07-26 22:47:42.039068] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.587 [2024-07-26 22:47:42.039099] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.587 [2024-07-26 22:47:42.039122] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.587 [2024-07-26 22:47:42.039132] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.587 [2024-07-26 22:47:42.039157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.846 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:49.846 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:49.846 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.846 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.846 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:49.846 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.846 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:50.130 [2024-07-26 22:47:42.391513] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:50.130 [2024-07-26 22:47:42.391661] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:50.130 [2024-07-26 22:47:42.391719] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:50.130 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:50.130 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ca75483d-08a1-46ee-822f-f0a5241516f5 00:17:50.130 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=ca75483d-08a1-46ee-822f-f0a5241516f5 00:17:50.130 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:50.130 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:50.130 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:50.131 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:50.131 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:50.389 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ca75483d-08a1-46ee-822f-f0a5241516f5 -t 2000 00:17:50.648 [ 00:17:50.648 { 00:17:50.648 "name": "ca75483d-08a1-46ee-822f-f0a5241516f5", 00:17:50.648 "aliases": [ 00:17:50.648 "lvs/lvol" 00:17:50.648 ], 00:17:50.648 "product_name": "Logical Volume", 00:17:50.648 "block_size": 4096, 00:17:50.648 "num_blocks": 38912, 00:17:50.648 "uuid": "ca75483d-08a1-46ee-822f-f0a5241516f5", 00:17:50.648 "assigned_rate_limits": { 00:17:50.648 "rw_ios_per_sec": 0, 00:17:50.648 "rw_mbytes_per_sec": 0, 00:17:50.648 "r_mbytes_per_sec": 0, 00:17:50.648 "w_mbytes_per_sec": 0 00:17:50.648 }, 00:17:50.648 "claimed": false, 00:17:50.648 "zoned": false, 00:17:50.648 "supported_io_types": { 00:17:50.648 "read": true, 00:17:50.648 "write": true, 00:17:50.648 "unmap": true, 00:17:50.648 "write_zeroes": true, 00:17:50.648 "flush": false, 00:17:50.648 "reset": true, 00:17:50.648 "compare": false, 00:17:50.648 "compare_and_write": false, 00:17:50.648 "abort": false, 00:17:50.648 "nvme_admin": false, 00:17:50.649 "nvme_io": false 00:17:50.649 }, 00:17:50.649 "driver_specific": { 00:17:50.649 "lvol": { 00:17:50.649 "lvol_store_uuid": "c260786f-aa66-43d1-b6d3-76ea477be6c6", 00:17:50.649 "base_bdev": "aio_bdev", 00:17:50.649 "thin_provision": false, 00:17:50.649 "num_allocated_clusters": 38, 00:17:50.649 "snapshot": false, 00:17:50.649 "clone": false, 00:17:50.649 "esnap_clone": false 00:17:50.649 } 00:17:50.649 } 00:17:50.649 } 00:17:50.649 ] 00:17:50.649 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:50.649 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:50.649 22:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:50.908 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:50.908 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:50.908 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:50.908 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:50.908 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:51.168 [2024-07-26 22:47:43.632475] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:51.168 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:51.733 request: 00:17:51.733 { 00:17:51.733 "uuid": "c260786f-aa66-43d1-b6d3-76ea477be6c6", 00:17:51.733 "method": "bdev_lvol_get_lvstores", 00:17:51.733 "req_id": 1 00:17:51.733 } 00:17:51.733 Got JSON-RPC error response 00:17:51.733 response: 00:17:51.733 { 00:17:51.733 "code": -19, 00:17:51.733 "message": "No such device" 00:17:51.733 } 00:17:51.733 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:51.733 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:51.733 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:51.733 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:51.733 22:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:51.733 aio_bdev 00:17:51.733 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ca75483d-08a1-46ee-822f-f0a5241516f5 00:17:51.733 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=ca75483d-08a1-46ee-822f-f0a5241516f5 00:17:51.733 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:51.733 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:51.733 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
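Editor's note: the JSON-RPC error -19 just above is the expected negative check, not a failure: deleting the base AIO bdev hot-removes the lvstore, and re-creating it over the same file -- which was never cleanly unloaded -- must replay the blobstore ("Performing recovery on blobstore" above) and bring the lvol back. A minimal sketch, continuing the illustrative names from the sketches above:

    # Base bdev gone => lvstore gone; the query must fail with -ENODEV.
    rpc.py bdev_aio_delete aio_bdev
    rpc.py bdev_lvol_get_lvstores -l lvs || echo "expected: No such device"

    # Re-create the AIO bdev; blobstore recovery re-registers lvs/lvol
    # with its allocated clusters intact.
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b lvs/lvol -t 2000 \
        | jq -r '.[0].driver_specific.lvol.num_allocated_clusters'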
00:17:51.733 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:51.733 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:51.991 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ca75483d-08a1-46ee-822f-f0a5241516f5 -t 2000 00:17:52.249 [ 00:17:52.249 { 00:17:52.249 "name": "ca75483d-08a1-46ee-822f-f0a5241516f5", 00:17:52.249 "aliases": [ 00:17:52.249 "lvs/lvol" 00:17:52.249 ], 00:17:52.249 "product_name": "Logical Volume", 00:17:52.249 "block_size": 4096, 00:17:52.249 "num_blocks": 38912, 00:17:52.249 "uuid": "ca75483d-08a1-46ee-822f-f0a5241516f5", 00:17:52.249 "assigned_rate_limits": { 00:17:52.249 "rw_ios_per_sec": 0, 00:17:52.249 "rw_mbytes_per_sec": 0, 00:17:52.249 "r_mbytes_per_sec": 0, 00:17:52.249 "w_mbytes_per_sec": 0 00:17:52.249 }, 00:17:52.249 "claimed": false, 00:17:52.249 "zoned": false, 00:17:52.249 "supported_io_types": { 00:17:52.249 "read": true, 00:17:52.249 "write": true, 00:17:52.249 "unmap": true, 00:17:52.249 "write_zeroes": true, 00:17:52.249 "flush": false, 00:17:52.249 "reset": true, 00:17:52.249 "compare": false, 00:17:52.249 "compare_and_write": false, 00:17:52.249 "abort": false, 00:17:52.249 "nvme_admin": false, 00:17:52.249 "nvme_io": false 00:17:52.249 }, 00:17:52.249 "driver_specific": { 00:17:52.249 "lvol": { 00:17:52.249 "lvol_store_uuid": "c260786f-aa66-43d1-b6d3-76ea477be6c6", 00:17:52.249 "base_bdev": "aio_bdev", 00:17:52.249 "thin_provision": false, 00:17:52.249 "num_allocated_clusters": 38, 00:17:52.249 "snapshot": false, 00:17:52.249 "clone": false, 00:17:52.249 "esnap_clone": false 00:17:52.249 } 00:17:52.249 } 00:17:52.249 } 00:17:52.249 ] 00:17:52.249 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:52.249 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:52.249 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:52.507 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:52.507 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:52.507 22:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:52.765 22:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:52.765 22:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ca75483d-08a1-46ee-822f-f0a5241516f5 00:17:53.023 22:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c260786f-aa66-43d1-b6d3-76ea477be6c6 00:17:53.281 22:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:53.539 22:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:53.539 00:17:53.539 real 0m18.977s 00:17:53.539 user 0m46.860s 00:17:53.539 sys 0m5.557s 00:17:53.539 22:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:53.539 22:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:53.539 ************************************ 00:17:53.539 END TEST lvs_grow_dirty 00:17:53.539 ************************************ 00:17:53.539 22:47:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:53.539 22:47:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:53.539 22:47:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:53.539 nvmf_trace.0 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.539 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.798 rmmod nvme_tcp 00:17:53.798 rmmod nvme_fabrics 00:17:53.798 rmmod nvme_keyring 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3525607 ']' 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3525607 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3525607 ']' 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3525607 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3525607 00:17:53.798 22:47:46 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3525607' 00:17:53.798 killing process with pid 3525607 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3525607 00:17:53.798 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3525607 00:17:54.056 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.056 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:54.056 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:54.056 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.056 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.056 22:47:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.056 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.056 22:47:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.955 22:47:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.955 00:17:55.955 real 0m41.770s 00:17:55.955 user 1m9.151s 00:17:55.955 sys 0m9.458s 00:17:55.955 22:47:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:55.955 22:47:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:55.955 ************************************ 00:17:55.955 END TEST nvmf_lvs_grow 00:17:55.955 ************************************ 00:17:55.955 22:47:48 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:55.955 22:47:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:55.955 22:47:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:55.955 22:47:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.955 ************************************ 00:17:55.955 START TEST nvmf_bdev_io_wait 00:17:55.955 ************************************ 00:17:55.955 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:56.212 * Looking for test storage... 
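Editor's note: before any I/O flows in the nvmf_bdev_io_wait run that follows, nvmf/common.sh pins one port of the two-port NIC inside a network namespace so target (10.0.0.2) and initiator (10.0.0.1) exercise a real TCP path on a single host. Condensed from the bring-up traced below, using this rig's cvl_0_0/cvl_0_1 device names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # sanity check the path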
00:17:56.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:56.213 22:47:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.113 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:58.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:58.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:58.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:58.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:58.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:58.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:17:58.114 00:17:58.114 --- 10.0.0.2 ping statistics --- 00:17:58.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.114 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:17:58.114 00:17:58.114 --- 10.0.0.1 ping statistics --- 00:17:58.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.114 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.114 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3528124 00:17:58.115 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:58.115 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3528124 00:17:58.115 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3528124 ']' 00:17:58.115 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.115 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:58.115 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.115 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:58.115 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.374 [2024-07-26 22:47:50.623921] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
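Editor's note: the --wait-for-rpc flag on nvmf_tgt above is the crux of this test: bdev options can only be changed before framework init, and the script (in the trace below) shrinks the bdev_io pool to 5 entries with a per-thread cache of 1, so that the later queue-depth-128 workloads overrun the pool and exercise the IO-wait path the test is named for. A minimal sketch of that startup order, assuming rpc.py defaults:

    # Order matters: options first, then framework init, then the transport.
    nvmf_tgt -m 0xF --wait-for-rpc &
    rpc.py bdev_set_options -p 5 -c 1     # tiny bdev_io pool forces the IO-wait path
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o -u 8192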
00:17:58.374 [2024-07-26 22:47:50.624009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.374 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.374 [2024-07-26 22:47:50.688952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.374 [2024-07-26 22:47:50.779986] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.374 [2024-07-26 22:47:50.780045] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.374 [2024-07-26 22:47:50.780057] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.374 [2024-07-26 22:47:50.780079] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.374 [2024-07-26 22:47:50.780089] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.374 [2024-07-26 22:47:50.780178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.374 [2024-07-26 22:47:50.780251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.374 [2024-07-26 22:47:50.780321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.374 [2024-07-26 22:47:50.780324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.374 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.632 [2024-07-26 22:47:50.942734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.632 22:47:50 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.632 Malloc0 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.632 22:47:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.632 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.632 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.632 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.633 [2024-07-26 22:47:51.005604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3528238 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3528240 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:58.633 { 00:17:58.633 "params": { 00:17:58.633 "name": "Nvme$subsystem", 00:17:58.633 "trtype": "$TEST_TRANSPORT", 00:17:58.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.633 "adrfam": "ipv4", 00:17:58.633 "trsvcid": "$NVMF_PORT", 00:17:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.633 "hdgst": ${hdgst:-false}, 00:17:58.633 "ddgst": ${ddgst:-false} 00:17:58.633 }, 00:17:58.633 "method": "bdev_nvme_attach_controller" 00:17:58.633 } 00:17:58.633 EOF 00:17:58.633 )") 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3528243 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:58.633 { 00:17:58.633 "params": { 00:17:58.633 "name": "Nvme$subsystem", 00:17:58.633 "trtype": "$TEST_TRANSPORT", 00:17:58.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.633 "adrfam": "ipv4", 00:17:58.633 "trsvcid": "$NVMF_PORT", 00:17:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.633 "hdgst": ${hdgst:-false}, 00:17:58.633 "ddgst": ${ddgst:-false} 00:17:58.633 }, 00:17:58.633 "method": "bdev_nvme_attach_controller" 00:17:58.633 } 00:17:58.633 EOF 00:17:58.633 )") 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3528247 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:58.633 { 00:17:58.633 "params": { 00:17:58.633 "name": "Nvme$subsystem", 00:17:58.633 "trtype": "$TEST_TRANSPORT", 00:17:58.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.633 "adrfam": "ipv4", 00:17:58.633 "trsvcid": "$NVMF_PORT", 00:17:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.633 "hdgst": ${hdgst:-false}, 00:17:58.633 "ddgst": ${ddgst:-false} 00:17:58.633 }, 00:17:58.633 "method": "bdev_nvme_attach_controller" 00:17:58.633 } 00:17:58.633 EOF 00:17:58.633 )") 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:17:58.633 { 00:17:58.633 "params": { 00:17:58.633 "name": "Nvme$subsystem", 00:17:58.633 "trtype": "$TEST_TRANSPORT", 00:17:58.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.633 "adrfam": "ipv4", 00:17:58.633 "trsvcid": "$NVMF_PORT", 00:17:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.633 "hdgst": ${hdgst:-false}, 00:17:58.633 "ddgst": ${ddgst:-false} 00:17:58.633 }, 00:17:58.633 "method": "bdev_nvme_attach_controller" 00:17:58.633 } 00:17:58.633 EOF 00:17:58.633 )") 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3528238 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:58.633 "params": { 00:17:58.633 "name": "Nvme1", 00:17:58.633 "trtype": "tcp", 00:17:58.633 "traddr": "10.0.0.2", 00:17:58.633 "adrfam": "ipv4", 00:17:58.633 "trsvcid": "4420", 00:17:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.633 "hdgst": false, 00:17:58.633 "ddgst": false 00:17:58.633 }, 00:17:58.633 "method": "bdev_nvme_attach_controller" 00:17:58.633 }' 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:58.633 "params": { 00:17:58.633 "name": "Nvme1", 00:17:58.633 "trtype": "tcp", 00:17:58.633 "traddr": "10.0.0.2", 00:17:58.633 "adrfam": "ipv4", 00:17:58.633 "trsvcid": "4420", 00:17:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.633 "hdgst": false, 00:17:58.633 "ddgst": false 00:17:58.633 }, 00:17:58.633 "method": "bdev_nvme_attach_controller" 00:17:58.633 }' 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:58.633 "params": { 00:17:58.633 "name": "Nvme1", 00:17:58.633 "trtype": "tcp", 00:17:58.633 "traddr": "10.0.0.2", 00:17:58.633 "adrfam": "ipv4", 00:17:58.633 "trsvcid": "4420", 00:17:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.633 "hdgst": false, 00:17:58.633 "ddgst": false 00:17:58.633 }, 00:17:58.633 "method": "bdev_nvme_attach_controller" 00:17:58.633 }' 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:58.633 22:47:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:58.633 "params": { 00:17:58.633 "name": "Nvme1", 00:17:58.633 "trtype": "tcp", 00:17:58.633 "traddr": "10.0.0.2", 00:17:58.633 "adrfam": "ipv4", 00:17:58.633 "trsvcid": "4420", 00:17:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.633 "hdgst": false, 00:17:58.633 "ddgst": false 00:17:58.633 }, 00:17:58.633 "method": "bdev_nvme_attach_controller" 00:17:58.633 }' 00:17:58.633 [2024-07-26 22:47:51.052040] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:58.633 [2024-07-26 22:47:51.052042] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:58.633 [2024-07-26 22:47:51.052136] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:58.633 [2024-07-26 22:47:51.052136] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:58.633 [2024-07-26 22:47:51.052632] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:58.633 [2024-07-26 22:47:51.052632] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
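The four identical JSON documents printed above are what gen_nvmf_target_json (nvmf/common.sh) emits for subsystem 1; each bdevperf instance reads its copy through bash process substitution, which is why every command line carries --json /dev/fd/63. A minimal sketch of one such launch, with $BDEVPERF standing in for the full build/examples/bdevperf path shown in the trace:

    # gen_nvmf_target_json prints the bdev_nvme_attach_controller config to
    # stdout; <(...) hands that stream to the child process as /dev/fd/NN.
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!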
00:17:58.633 [2024-07-26 22:47:51.052709] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:58.634 [2024-07-26 22:47:51.052709] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:58.634 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.892 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.892 [2024-07-26 22:47:51.233203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.892 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.892 [2024-07-26 22:47:51.309227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:58.892 [2024-07-26 22:47:51.354832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.892 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.149 [2024-07-26 22:47:51.410755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.149 [2024-07-26 22:47:51.429908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:59.149 [2024-07-26 22:47:51.477385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.149 [2024-07-26 22:47:51.478470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:59.149 [2024-07-26 22:47:51.544493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:59.408 Running I/O for 1 seconds... 00:17:59.408 Running I/O for 1 seconds... 00:17:59.408 Running I/O for 1 seconds... 00:17:59.408 Running I/O for 1 seconds...
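Core masks 0x10/0x20/0x40/0x80 pin the write, read, flush and unmap jobs to cores 4-7, matching the reactor messages above, and the per-job -i shm ids keep the DPDK file prefixes (spdk1..spdk4) from colliding. A rough sketch of the fan-out, under the same $BDEVPERF and gen_nvmf_target_json assumptions as before:

    masks=(0x10 0x20 0x40 0x80)
    workloads=(write read flush unmap)
    for i in 1 2 3 4; do
        # one instance per workload, each on its own core and shm id
        "$BDEVPERF" -m "${masks[i-1]}" -i "$i" --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "${workloads[i-1]}" -t 1 -s 256 &
        pids[i]=$!
    done
    wait "${pids[@]}"   # the script waits on WRITE_PID first; same net effect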
00:18:00.345 00:18:00.345 Latency(us) 00:18:00.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.345 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:00.345 Nvme1n1 : 1.01 10578.74 41.32 0.00 0.00 12052.73 6699.24 22330.79 00:18:00.345 =================================================================================================================== 00:18:00.345 Total : 10578.74 41.32 0.00 0.00 12052.73 6699.24 22330.79 00:18:00.345 00:18:00.345 Latency(us) 00:18:00.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.345 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:00.345 Nvme1n1 : 1.00 145403.07 567.98 0.00 0.00 876.94 259.41 1092.27 00:18:00.345 =================================================================================================================== 00:18:00.345 Total : 145403.07 567.98 0.00 0.00 876.94 259.41 1092.27 00:18:00.345 00:18:00.345 Latency(us) 00:18:00.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.345 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:00.345 Nvme1n1 : 1.01 6977.30 27.26 0.00 0.00 18257.99 7184.69 24855.13 00:18:00.345 =================================================================================================================== 00:18:00.346 Total : 6977.30 27.26 0.00 0.00 18257.99 7184.69 24855.13 00:18:00.603 00:18:00.603 Latency(us) 00:18:00.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.603 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:00.603 Nvme1n1 : 1.01 9195.84 35.92 0.00 0.00 13858.55 7330.32 21748.24 00:18:00.603 =================================================================================================================== 00:18:00.603 Total : 9195.84 35.92 0.00 0.00 13858.55 7330.32 21748.24 00:18:00.604 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3528240 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3528243 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3528247 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:00.862 rmmod nvme_tcp 00:18:00.862 rmmod nvme_fabrics 00:18:00.862 rmmod nvme_keyring 00:18:00.862 22:47:53 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3528124 ']' 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3528124 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3528124 ']' 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3528124 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3528124 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3528124' 00:18:00.862 killing process with pid 3528124 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3528124 00:18:00.862 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3528124 00:18:01.121 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:01.121 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:01.121 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:01.121 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.121 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:01.121 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.121 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.121 22:47:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.021 22:47:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.021 00:18:03.021 real 0m7.032s 00:18:03.021 user 0m16.050s 00:18:03.021 sys 0m3.575s 00:18:03.021 22:47:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:03.021 22:47:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:03.021 ************************************ 00:18:03.021 END TEST nvmf_bdev_io_wait 00:18:03.021 ************************************ 00:18:03.021 22:47:55 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:03.021 22:47:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:03.021 22:47:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:03.021 22:47:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:03.280 ************************************ 00:18:03.280 START TEST nvmf_queue_depth 00:18:03.280 ************************************ 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:03.280 * Looking for test storage... 00:18:03.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.280 22:47:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.181 
22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:05.181 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:05.182 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:05.182 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:05.182 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:05.182 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:05.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:05.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:18:05.182 00:18:05.182 --- 10.0.0.2 ping statistics --- 00:18:05.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.182 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:18:05.182 00:18:05.182 --- 10.0.0.1 ping statistics --- 00:18:05.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.182 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3530372 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3530372 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3530372 ']' 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:05.182 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.182 [2024-07-26 22:47:57.674117] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
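The two single-packet pings are the connectivity gate for the namespace topology that nvmf_tcp_init wired up just before them: port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> root ns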
00:18:05.182 [2024-07-26 22:47:57.674188] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.441 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.441 [2024-07-26 22:47:57.738839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.441 [2024-07-26 22:47:57.824758] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.441 [2024-07-26 22:47:57.824809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.441 [2024-07-26 22:47:57.824837] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.441 [2024-07-26 22:47:57.824848] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.441 [2024-07-26 22:47:57.824857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.441 [2024-07-26 22:47:57.824891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.441 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:05.441 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:05.441 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.441 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.441 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.699 22:47:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.699 22:47:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:05.699 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.699 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.699 [2024-07-26 22:47:57.966238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.699 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.699 22:47:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:05.699 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.699 22:47:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.699 Malloc0 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.699 22:47:58 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.699 [2024-07-26 22:47:58.032149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.699 22:47:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3530507 00:18:05.700 22:47:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:05.700 22:47:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3530507 /var/tmp/bdevperf.sock 00:18:05.700 22:47:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:05.700 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3530507 ']' 00:18:05.700 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.700 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:05.700 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.700 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:05.700 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.700 [2024-07-26 22:47:58.080478] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
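Stripped of the xtrace noise, the queue-depth scenario around this point is: expose a 64 MiB, 512-byte-block malloc bdev as namespace 1 of cnode1 on 10.0.0.2:4420, then drive it from a second process at queue depth 1024. Assuming scripts/rpc.py is reachable as rpc.py (the rpc_cmd wrapper in the log), the sequence is:

    # target side (nvmf_tgt runs inside cvl_0_0_ns_spdk)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf idles in -z mode on its own RPC socket,
    # attaches the remote controller, then runs the workload
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests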
00:18:05.700 [2024-07-26 22:47:58.080554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530507 ] 00:18:05.700 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.700 [2024-07-26 22:47:58.140482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.958 [2024-07-26 22:47:58.232673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.958 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:05.958 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:05.958 22:47:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:05.958 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.958 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.958 NVMe0n1 00:18:05.958 22:47:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.958 22:47:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.224 Running I/O for 10 seconds... 00:18:16.232 00:18:16.232 Latency(us) 00:18:16.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.232 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:16.232 Verification LBA range: start 0x0 length 0x4000 00:18:16.232 NVMe0n1 : 10.08 8217.76 32.10 0.00 0.00 124080.42 24758.04 76895.57 00:18:16.232 =================================================================================================================== 00:18:16.232 Total : 8217.76 32.10 0.00 0.00 124080.42 24758.04 76895.57 00:18:16.232 0 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3530507 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3530507 ']' 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3530507 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3530507 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3530507' 00:18:16.232 killing process with pid 3530507 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3530507 00:18:16.232 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.232 00:18:16.232 Latency(us) 00:18:16.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.232 =================================================================================================================== 00:18:16.232 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.232 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3530507 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.490 rmmod nvme_tcp 00:18:16.490 rmmod nvme_fabrics 00:18:16.490 rmmod nvme_keyring 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3530372 ']' 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3530372 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3530372 ']' 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3530372 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3530372 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3530372' 00:18:16.490 killing process with pid 3530372 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3530372 00:18:16.490 22:48:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3530372 00:18:16.747 22:48:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.747 22:48:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.747 22:48:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.747 22:48:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.747 22:48:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.747 22:48:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.747 22:48:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.747 22:48:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.297 22:48:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.297 00:18:19.297 real 0m15.729s 00:18:19.297 user 0m21.337s 00:18:19.297 sys 
0m3.365s 00:18:19.297 22:48:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:19.297 22:48:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:19.297 ************************************ 00:18:19.297 END TEST nvmf_queue_depth 00:18:19.297 ************************************ 00:18:19.297 22:48:11 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:19.297 22:48:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:19.297 22:48:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:19.297 22:48:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:19.297 ************************************ 00:18:19.297 START TEST nvmf_target_multipath 00:18:19.297 ************************************ 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:19.297 * Looking for test storage... 00:18:19.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.297 22:48:11 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.297 22:48:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
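For reference, the nvmf_queue_depth teardown traced above (nvmftestfini) condenses to a short shell sequence. This is a sketch, not the full helper from nvmf/common.sh: the helper retries the module unloads up to 20 times, and the interface and namespace names (cvl_0_1, cvl_0_0_ns_spdk) are the ones from this particular run.

    sync                          # flush outstanding I/O before unloading the initiator stack
    modprobe -v -r nvme-tcp       # cascades in this run: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics   # effectively a no-op here, the cascade above already removed it
    kill "$nvmfpid"               # stop the nvmf_tgt reactor (pid 3530372 above) ...
    wait "$nvmfpid"               # ... and reap it
    _remove_spdk_ns               # assumed to delete cvl_0_0_ns_spdk; its output is hidden (14> /dev/null)
    ip -4 addr flush cvl_0_1      # drop the initiator-side 10.0.0.1/24 address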
00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:19.298 22:48:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:21.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:21.200 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.200 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:21.201 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:21.201 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:21.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:18:21.201 00:18:21.201 --- 10.0.0.2 ping statistics --- 00:18:21.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.201 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:21.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:18:21.201 00:18:21.201 --- 10.0.0.1 ping statistics --- 00:18:21.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.201 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:21.201 only one NIC for nvmf test 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:21.201 rmmod nvme_tcp 00:18:21.201 rmmod nvme_fabrics 00:18:21.201 rmmod nvme_keyring 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.201 22:48:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:23.103 00:18:23.103 real 0m4.189s 00:18:23.103 user 0m0.747s 00:18:23.103 sys 0m1.426s 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:23.103 22:48:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:23.103 ************************************ 00:18:23.103 END TEST nvmf_target_multipath 00:18:23.103 ************************************ 00:18:23.103 22:48:15 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:23.103 22:48:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:23.103 22:48:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:23.103 22:48:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:23.103 ************************************ 00:18:23.103 START TEST nvmf_zcopy 00:18:23.103 ************************************ 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:23.103 * Looking for test storage... 
00:18:23.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.103 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:23.361 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:23.362 22:48:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:25.265 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:25.266 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.266 
22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:25.266 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:25.266 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:25.266 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:25.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:18:25.266 00:18:25.266 --- 10.0.0.2 ping statistics --- 00:18:25.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.266 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:25.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:18:25.266 00:18:25.266 --- 10.0.0.1 ping statistics --- 00:18:25.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.266 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3536178 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3536178 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3536178 ']' 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:25.266 22:48:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.267 [2024-07-26 22:48:17.730330] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:25.267 [2024-07-26 22:48:17.730416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.267 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.525 [2024-07-26 22:48:17.798835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.525 [2024-07-26 22:48:17.888774] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.525 [2024-07-26 22:48:17.888831] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:25.525 [2024-07-26 22:48:17.888847] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.525 [2024-07-26 22:48:17.888861] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.525 [2024-07-26 22:48:17.888873] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.526 [2024-07-26 22:48:17.888904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.526 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.785 [2024-07-26 22:48:18.030387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.785 [2024-07-26 22:48:18.046585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.785 malloc0 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.785 
22:48:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:25.785 { 00:18:25.785 "params": { 00:18:25.785 "name": "Nvme$subsystem", 00:18:25.785 "trtype": "$TEST_TRANSPORT", 00:18:25.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:25.785 "adrfam": "ipv4", 00:18:25.785 "trsvcid": "$NVMF_PORT", 00:18:25.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:25.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:25.785 "hdgst": ${hdgst:-false}, 00:18:25.785 "ddgst": ${ddgst:-false} 00:18:25.785 }, 00:18:25.785 "method": "bdev_nvme_attach_controller" 00:18:25.785 } 00:18:25.785 EOF 00:18:25.785 )") 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:25.785 22:48:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:25.785 "params": { 00:18:25.785 "name": "Nvme1", 00:18:25.785 "trtype": "tcp", 00:18:25.785 "traddr": "10.0.0.2", 00:18:25.785 "adrfam": "ipv4", 00:18:25.785 "trsvcid": "4420", 00:18:25.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.785 "hdgst": false, 00:18:25.785 "ddgst": false 00:18:25.785 }, 00:18:25.785 "method": "bdev_nvme_attach_controller" 00:18:25.785 }' 00:18:25.785 [2024-07-26 22:48:18.129994] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:25.785 [2024-07-26 22:48:18.130080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3536204 ] 00:18:25.785 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.785 [2024-07-26 22:48:18.191971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.785 [2024-07-26 22:48:18.285634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.044 Running I/O for 10 seconds... 
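While the 10-second verify run above is in flight, it is worth restating the target-side setup it exercises. rpc_cmd in the trace wraps scripts/rpc.py from this workspace, so the zcopy-enabled target was stood up with the equivalent of the following condensed sketch (flag glosses per this tree's rpc.py; the discovery listener added at zcopy.sh@27 is omitted):

    # TCP transport with zero-copy enabled; -c 0: in-capsule data size 0, -o: C2H success optimization off
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem: allow any host (-a), serial number (-s), at most 10 namespaces (-m)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # data listener on 10.0.0.2:4420, i.e. cvl_0_0 inside the target's netns
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 32 MB RAM-backed bdev with 4096-byte blocks, exported as namespace 1
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1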
00:18:36.015 00:18:36.015 Latency(us) 00:18:36.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.015 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:36.015 Verification LBA range: start 0x0 length 0x1000 00:18:36.015 Nvme1n1 : 10.02 5816.84 45.44 0.00 0.00 21945.06 3689.43 33399.09 00:18:36.015 =================================================================================================================== 00:18:36.015 Total : 5816.84 45.44 0.00 0.00 21945.06 3689.43 33399.09 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3537511 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:36.273 { 00:18:36.273 "params": { 00:18:36.273 "name": "Nvme$subsystem", 00:18:36.273 "trtype": "$TEST_TRANSPORT", 00:18:36.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:36.273 "adrfam": "ipv4", 00:18:36.273 "trsvcid": "$NVMF_PORT", 00:18:36.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:36.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:36.273 "hdgst": ${hdgst:-false}, 00:18:36.273 "ddgst": ${ddgst:-false} 00:18:36.273 }, 00:18:36.273 "method": "bdev_nvme_attach_controller" 00:18:36.273 } 00:18:36.273 EOF 00:18:36.273 )") 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
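The second bdevperf instance (perfpid=3537511) receives its controller configuration the same way as the first: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza assembled by the heredoc above, and the harness hands it over via bash process substitution, which is why the trace shows --json /dev/fd/63. Roughly, assuming the paths from this workspace:

    # 5 s of random 50/50 read/write at queue depth 128 with 8 KiB I/Os;
    # <(...) shows up in the child process as /dev/fd/<n>
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192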
00:18:36.273 [2024-07-26 22:48:28.749756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.273 [2024-07-26 22:48:28.749806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:36.273 22:48:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:36.273 "params": { 00:18:36.273 "name": "Nvme1", 00:18:36.273 "trtype": "tcp", 00:18:36.273 "traddr": "10.0.0.2", 00:18:36.273 "adrfam": "ipv4", 00:18:36.273 "trsvcid": "4420", 00:18:36.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.273 "hdgst": false, 00:18:36.273 "ddgst": false 00:18:36.273 }, 00:18:36.273 "method": "bdev_nvme_attach_controller" 00:18:36.273 }' 00:18:36.273 [2024-07-26 22:48:28.757699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.273 [2024-07-26 22:48:28.757726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.273 [2024-07-26 22:48:28.765718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.273 [2024-07-26 22:48:28.765743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.273 [2024-07-26 22:48:28.773738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.273 [2024-07-26 22:48:28.773762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.781759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.781784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.785640] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:18:36.532 [2024-07-26 22:48:28.785714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537511 ] 00:18:36.532 [2024-07-26 22:48:28.789784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.789810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.797804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.797829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.805826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.805851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.532 [2024-07-26 22:48:28.813848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.813872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.821870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.821895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.829893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.829917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.837913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.837938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.845935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.845960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.848539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.532 [2024-07-26 22:48:28.853976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.854007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.862021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.862073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.870004] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.870029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.878057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.878116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.532 [2024-07-26 22:48:28.886051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.532 [2024-07-26 22:48:28.886086] 
00:18:36.532 [2024-07-26 22:48:28.894077 .. 22:48:28.942214] [same error pair repeated at ~8 ms intervals; duplicates elided]
00:18:36.532 [2024-07-26 22:48:28.947317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:36.532 [2024-07-26 22:48:28.950212 .. 22:48:28.998460] [same error pair repeated; duplicates elided]
00:18:36.533-00:18:36.794 [2024-07-26 22:48:29.006436 .. 22:48:29.182925] [same error pair repeated at ~8-11 ms intervals; duplicates elided]
00:18:36.794 Running I/O for 5 seconds...
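The two-line pair that dominates this run is the nvmf target rejecting repeated namespace-add attempts: the subsystem already holds a namespace with NSID 1, so spdk_nvmf_subsystem_add_ns_ext fails and the RPC layer (nvmf_rpc_ns_paused, the paused-subsystem callback of the add-ns RPC) reports "Unable to add namespace". A hedged sketch of the kind of call that elicits it, with the bdev name and flag spelling assumed rather than taken from this log:

  # Re-requesting NSID 1 while it is occupied reproduces the error pair (sketch):
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0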
00:18:36.794-00:18:39.423 [2024-07-26 22:48:29.190922 .. 22:48:31.768100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [same two-line pair repeated at ~11 ms intervals for the whole I/O run; duplicates elided]
00:18:39.423 [2024-07-26 22:48:31.778842]
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.778869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.789444] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.789470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.800073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.800098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.811310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.811337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.822165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.822192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.833028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.833055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.844003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.844045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.854785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.854813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.865890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.865916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.876605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.876632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.887418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.887444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.898290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.898316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.909393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.909423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.423 [2024-07-26 22:48:31.922318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.423 [2024-07-26 22:48:31.922346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:31.932032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:31.932069] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:31.943308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:31.943339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:31.954233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:31.954260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:31.964801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:31.964827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:31.975627] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:31.975653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:31.986258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:31.986286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:31.996943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:31.996969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.008033] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.008067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.018954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.018981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.029815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.029841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.042318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.042346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.052255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.052281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.063939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.063966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.074958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.074984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.085530] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.085556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.096009] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.096035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.106759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.106785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.119085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.119111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.128786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.128812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.140266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.140293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.151207] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.151233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.161749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.161775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.174391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.174418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.682 [2024-07-26 22:48:32.184027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.682 [2024-07-26 22:48:32.184079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.195096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.195132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.206112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.206139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.217066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.217093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.227812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.227839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.238921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.238948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.249934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.249961] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.260781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.260808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.271703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.271730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.284179] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.284207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.293722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.293749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.305176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.305203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.318230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.318257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.941 [2024-07-26 22:48:32.328469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.941 [2024-07-26 22:48:32.328500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.339959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.339986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.350765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.350793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.361543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.361569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.372432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.372461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.383656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.383683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.394437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.394464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.407123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.407150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.416944] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.416971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.428638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.428665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.942 [2024-07-26 22:48:32.439331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.942 [2024-07-26 22:48:32.439358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.200 [2024-07-26 22:48:32.450156] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.200 [2024-07-26 22:48:32.450182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.461097] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.461124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.471952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.471978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.483213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.483240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.494194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.494220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.506964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.507008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.516928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.516954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.528288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.528315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.539285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.539311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.550195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.550222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.563166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.563193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.573695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.573722] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.584469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.584508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.597226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.597254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.606742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.606772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.617972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.617999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.629556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.629583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.640452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.640479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.651344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.651371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.662092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.662119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.673033] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.673067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.685644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.685670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.201 [2024-07-26 22:48:32.697462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.201 [2024-07-26 22:48:32.697489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.706699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.706726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.718236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.718263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.728925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.728952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.739649] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.739676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.752318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.752345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.761885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.761912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.773478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.773505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.784695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.784722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.795659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.795696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.806703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.806730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.819011] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.819038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.829343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.829370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.840900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.840927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.851402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.851429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.862057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.862093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.873018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.873045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.884211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.884238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.895292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.895319] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.906052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.906086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.918384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.918412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.928824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.928851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.939631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.939673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.949932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.949958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.460 [2024-07-26 22:48:32.960692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.460 [2024-07-26 22:48:32.960720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:32.973263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:32.973291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:32.983331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:32.983379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:32.994807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:32.994839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.007920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.007957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.018466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.018493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.029240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.029268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.040229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.040257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.051209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.051237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.063317] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.063346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.072532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.072559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.084363] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.084389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.095006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.095033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.106044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.106081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.116785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.116811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.127413] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.127439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.138151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.138179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.148859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.148886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.160015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.160056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.170636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.170663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.183203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.183230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.192254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.192281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.203578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.203605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.719 [2024-07-26 22:48:33.214196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.719 [2024-07-26 22:48:33.214230] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.976 [2024-07-26 22:48:33.225280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.976 [2024-07-26 22:48:33.225307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.976 [2024-07-26 22:48:33.236342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.976 [2024-07-26 22:48:33.236383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.976 [2024-07-26 22:48:33.249298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.976 [2024-07-26 22:48:33.249324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.259357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.259383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.270932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.270958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.283365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.283392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.293108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.293134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.304425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.304451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.315439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.315465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.326183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.326209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.336976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.337002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.347931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.347957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.358885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.358911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.369772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.369798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.380753] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.380779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.391706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.391732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.404416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.404443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.414095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.414121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.425343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.425391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.436411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.436437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.449265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.449291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.459282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.459308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.977 [2024-07-26 22:48:33.470473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.977 [2024-07-26 22:48:33.470500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.234 [2024-07-26 22:48:33.481308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.234 [2024-07-26 22:48:33.481335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.234 [2024-07-26 22:48:33.492535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.234 [2024-07-26 22:48:33.492561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.234 [2024-07-26 22:48:33.503618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.503645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.514740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.514766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.525822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.525848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.536886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.536913] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.547538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.547564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.560099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.560125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.569618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.569645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.580756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.580797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.601355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.601384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.614122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.614152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.623439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.623466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.635175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.635209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.645640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.645667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.657439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.657465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.670321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.670349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.679861] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.679888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.691121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.691148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.701251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.701292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.712570] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.712597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.723670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.723697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.235 [2024-07-26 22:48:33.734551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.235 [2024-07-26 22:48:33.734578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.745229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.745256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.756327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.756354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.767342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.767369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.778386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.778428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.790911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.790937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.801321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.801347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.811866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.811892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.825017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.825043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.835131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.835157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.846401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.846427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.857396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.857423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.868032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.868066] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.878766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.878793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.889544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.889570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.900414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.900440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.911280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.911307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.922147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.922173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.932883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.932911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.943653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.943681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.954387] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.954416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.967255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.967282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.977049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.977084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.493 [2024-07-26 22:48:33.988519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.493 [2024-07-26 22:48:33.988547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:33.999107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:33.999134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.010143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.010170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.023056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.023093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.033474] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.033501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.043686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.043713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.055563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.055590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.066482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.066509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.079474] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.079500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.089790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.089817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.100468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.100496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.111249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.111277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.122223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.122252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.133068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.133096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.144106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.144134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.155428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.155455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.166744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.166771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.177565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.177594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.188584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.188612] 
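Each pair of entries above is one failed nvmf_subsystem_add_ns RPC: the target pauses the subsystem, the paused-state callback (nvmf_rpc_ns_paused) attempts the attach, and spdk_nvmf_subsystem_add_ns_ext rejects it because NSID 1 is already attached to the subsystem. That is the expected failure mode for a stress loop that keeps re-adding an existing namespace. A minimal sketch of how the same error can be provoked by hand, assuming a running nvmf target with a transport already configured and a reasonably recent SPDK; the bdev name and NQN here are illustrative, not taken from this job:

scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                            # hypothetical 64 MiB malloc bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a             # illustrative NQN, allow any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # first add succeeds
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # second add fails: "Requested NSID 1 already in use"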
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.201139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.201166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 [2024-07-26 22:48:34.209166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.209192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.751 00:18:41.751 Latency(us) 00:18:41.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.751 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:41.751 Nvme1n1 : 5.01 11655.51 91.06 0.00 0.00 10968.03 4830.25 24466.77 00:18:41.751 =================================================================================================================== 00:18:41.751 Total : 11655.51 91.06 0.00 0.00 10968.03 4830.25 24466.77 00:18:41.751 [2024-07-26 22:48:34.217223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.751 [2024-07-26 22:48:34.217249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.752 [2024-07-26 22:48:34.225202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.752 [2024-07-26 22:48:34.225241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.752 [2024-07-26 22:48:34.233308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.752 [2024-07-26 22:48:34.233365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.752 [2024-07-26 22:48:34.241306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.752 [2024-07-26 22:48:34.241352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.752 [2024-07-26 22:48:34.249334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.752 [2024-07-26 22:48:34.249384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.257369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.257416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.265385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.265435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.273398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.273444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.281419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.281467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.289466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.289515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.297477] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.297529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.305500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.305553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.313513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.313564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.321532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.321584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.329554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.329601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.337570] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.337620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.345563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.345597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.353569] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.353598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.361631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.361677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.369672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.369726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.377699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.377764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.385652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.385679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.393699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.393739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.401752] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.401800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.409774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.409826] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.417718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.417739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.425740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.425759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 [2024-07-26 22:48:34.433758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.010 [2024-07-26 22:48:34.433777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3537511) - No such process 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3537511 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:42.010 delay0 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.010 22:48:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:42.010 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.268 [2024-07-26 22:48:34.597252] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:48.825 Initializing NVMe Controllers 00:18:48.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:48.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:48.825 Initialization complete. Launching workers. 
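The teardown above doubles as a compact recipe for the abort phase of the zcopy test: drop the contested namespace, wrap malloc0 in a delay bdev so queued I/O stays in flight long enough to be aborted, re-export it as NSID 1, and drive it with the abort example whose per-namespace statistics follow below. A minimal sketch of the same sequence, assuming a running target and the scripts/rpc.py client behind the log's rpc_cmd wrapper:

  # remove the namespace the paused-subsystem loop was fighting over
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 in a delay bdev; all four latency knobs are set to 1,000,000 us (~1 s)
  rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # re-export the slow bdev as NSID 1 so outstanding I/O is abortable
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue randrw I/O at depth 64 for 5 seconds and abort it from the initiator side
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'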
00:18:48.825 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 150 00:18:48.825 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 437, failed to submit 33 00:18:48.825 success 254, unsuccess 183, failed 0 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.825 rmmod nvme_tcp 00:18:48.825 rmmod nvme_fabrics 00:18:48.825 rmmod nvme_keyring 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3536178 ']' 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3536178 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3536178 ']' 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3536178 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3536178 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3536178' 00:18:48.825 killing process with pid 3536178 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3536178 00:18:48.825 22:48:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3536178 00:18:48.825 22:48:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:48.825 22:48:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:48.825 22:48:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:48.825 22:48:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.825 22:48:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.825 22:48:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.825 22:48:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.825 22:48:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.728 22:48:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.728 00:18:50.728 real 0m27.504s 00:18:50.728 user 0m40.727s 00:18:50.728 sys 0m8.330s 00:18:50.728 22:48:43 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:50.728 22:48:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:50.728 ************************************ 00:18:50.728 END TEST nvmf_zcopy 00:18:50.728 ************************************ 00:18:50.728 22:48:43 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:50.728 22:48:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:50.728 22:48:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:50.728 22:48:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:50.728 ************************************ 00:18:50.728 START TEST nvmf_nmic 00:18:50.728 ************************************ 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:50.728 * Looking for test storage... 00:18:50.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.728 22:48:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:50.729 22:48:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:53.259 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:53.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:53.259 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:53.259 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:53.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:18:53.259 00:18:53.259 --- 10.0.0.2 ping statistics --- 00:18:53.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.259 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:53.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:18:53.259 00:18:53.259 --- 10.0.0.1 ping statistics --- 00:18:53.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.259 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.259 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3540761 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3540761 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3540761 ']' 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.260 [2024-07-26 22:48:45.455125] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:53.260 [2024-07-26 22:48:45.455218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.260 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.260 [2024-07-26 22:48:45.527701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.260 [2024-07-26 22:48:45.615111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.260 [2024-07-26 22:48:45.615165] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
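Everything nvmf_tcp_init traced above reduces to standard iproute2 plumbing: park the target-side port of the NIC pair in its own network namespace, address both ends of the 10.0.0.0/24 link, bring the interfaces up, and punch the NVMe/TCP port through the firewall before sanity-checking with ping in both directions. A rough equivalent, assuming the same cvl_0_0/cvl_0_1 device names the harness discovered:

  ip netns add cvl_0_0_ns_spdk                          # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The namespace split is what lets one machine act as both NVMe/TCP host and target over real NIC ports; nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk, exactly as the nvmfappstart line above shows.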
00:18:53.260 [2024-07-26 22:48:45.615185] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.260 [2024-07-26 22:48:45.615203] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.260 [2024-07-26 22:48:45.615218] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.260 [2024-07-26 22:48:45.615304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.260 [2024-07-26 22:48:45.615378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.260 [2024-07-26 22:48:45.615445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.260 [2024-07-26 22:48:45.615450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.260 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.518 22:48:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.518 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:53.518 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.518 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.518 [2024-07-26 22:48:45.766724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.518 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.518 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:53.518 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.518 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.518 Malloc0 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.519 [2024-07-26 22:48:45.818164] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:53.519 test case1: single bdev can't be used in multiple subsystems 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.519 [2024-07-26 22:48:45.841961] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:53.519 [2024-07-26 22:48:45.841991] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:53.519 [2024-07-26 22:48:45.842013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.519 request: 00:18:53.519 { 00:18:53.519 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:53.519 "namespace": { 00:18:53.519 "bdev_name": "Malloc0", 00:18:53.519 "no_auto_visible": false 00:18:53.519 }, 00:18:53.519 "method": "nvmf_subsystem_add_ns", 00:18:53.519 "req_id": 1 00:18:53.519 } 00:18:53.519 Got JSON-RPC error response 00:18:53.519 response: 00:18:53.519 { 00:18:53.519 "code": -32602, 00:18:53.519 "message": "Invalid parameters" 00:18:53.519 } 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:53.519 Adding namespace failed - expected result. 
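Both nmic test cases reduce to short RPC sequences against that target. Case 1 relies on the malloc bdev being claimed exclusive_write by the first subsystem that adds it, so a second nvmf_subsystem_add_ns must fail with the Invalid parameters response shown above; case 2 simply adds a second listener so the host can reach cnode1 over two ports. A condensed sketch of the calls behind the rpc_cmd traces (rpc.py standing in for the wrapper):

  # setup: TCP transport, subsystem, malloc-backed namespace, first listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # case 1: the same bdev cannot back a second subsystem
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail

  # case 2: a second listener gives the host two paths to cnode1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

The two nvme connect calls are why the disconnect step later in the log reports two controllers for the same NQN.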
00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:53.519 test case2: host connect to nvmf target in multiple paths 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.519 [2024-07-26 22:48:45.850096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.519 22:48:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:54.085 22:48:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:54.650 22:48:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:54.650 22:48:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:54.650 22:48:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.650 22:48:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:54.650 22:48:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:57.204 22:48:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:57.204 22:48:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:57.204 22:48:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:57.204 22:48:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:57.204 22:48:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.204 22:48:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:57.204 22:48:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:57.204 [global] 00:18:57.204 thread=1 00:18:57.204 invalidate=1 00:18:57.204 rw=write 00:18:57.204 time_based=1 00:18:57.204 runtime=1 00:18:57.204 ioengine=libaio 00:18:57.204 direct=1 00:18:57.204 bs=4096 00:18:57.204 iodepth=1 00:18:57.204 norandommap=0 00:18:57.204 numjobs=1 00:18:57.204 00:18:57.204 verify_dump=1 00:18:57.204 verify_backlog=512 00:18:57.204 verify_state_save=0 00:18:57.204 do_verify=1 00:18:57.204 verify=crc32c-intel 00:18:57.204 [job0] 00:18:57.204 filename=/dev/nvme0n1 00:18:57.204 Could not set queue depth (nvme0n1) 00:18:57.204 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:57.204 fio-3.35 00:18:57.204 Starting 1 thread 00:18:58.138 00:18:58.138 job0: (groupid=0, jobs=1): err= 0: pid=3541399: Fri Jul 26 22:48:50 2024 00:18:58.138 read: IOPS=1234, BW=4939KiB/s (5058kB/s)(4944KiB/1001msec) 00:18:58.138 slat (nsec): min=5349, max=55399, avg=12783.87, stdev=6054.53 
00:18:58.138 clat (usec): min=350, max=44923, avg=494.23, stdev=1716.31 00:18:58.138 lat (usec): min=356, max=44968, avg=507.01, stdev=1717.46 00:18:58.138 clat percentiles (usec): 00:18:58.138 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 396], 00:18:58.139 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 437], 00:18:58.139 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 465], 95.00th=[ 474], 00:18:58.139 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[41157], 99.95th=[44827], 00:18:58.139 | 99.99th=[44827] 00:18:58.139 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:58.139 slat (nsec): min=6595, max=71677, avg=13909.05, stdev=6849.98 00:18:58.139 clat (usec): min=175, max=357, avg=221.73, stdev=23.45 00:18:58.139 lat (usec): min=182, max=379, avg=235.64, stdev=28.18 00:18:58.139 clat percentiles (usec): 00:18:58.139 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:18:58.139 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:18:58.139 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:18:58.139 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 351], 99.95th=[ 359], 00:18:58.139 | 99.99th=[ 359] 00:18:58.139 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:18:58.139 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:58.139 lat (usec) : 250=48.70%, 500=50.79%, 750=0.40% 00:18:58.139 lat (msec) : 2=0.04%, 50=0.07% 00:18:58.139 cpu : usr=2.00%, sys=6.00%, ctx=2772, majf=0, minf=2 00:18:58.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.139 issued rwts: total=1236,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.139 00:18:58.139 Run status group 0 (all jobs): 00:18:58.139 READ: bw=4939KiB/s (5058kB/s), 4939KiB/s-4939KiB/s (5058kB/s-5058kB/s), io=4944KiB (5063kB), run=1001-1001msec 00:18:58.139 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:18:58.139 00:18:58.139 Disk stats (read/write): 00:18:58.139 nvme0n1: ios=1074/1503, merge=0/0, ticks=542/321, in_queue=863, util=92.18% 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:58.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.139 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.139 rmmod nvme_tcp 00:18:58.139 rmmod nvme_fabrics 00:18:58.139 rmmod nvme_keyring 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3540761 ']' 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3540761 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3540761 ']' 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3540761 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3540761 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3540761' 00:18:58.397 killing process with pid 3540761 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3540761 00:18:58.397 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3540761 00:18:58.656 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.656 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.656 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.656 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.656 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.656 22:48:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.656 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.656 22:48:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.559 22:48:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:00.559 00:19:00.559 real 0m9.854s 00:19:00.559 user 0m21.276s 00:19:00.559 sys 0m2.689s 00:19:00.559 22:48:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:00.559 22:48:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:00.559 ************************************ 00:19:00.559 END TEST nvmf_nmic 00:19:00.559 ************************************ 00:19:00.559 22:48:52 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:00.559 22:48:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:00.559 22:48:52 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:19:00.559 22:48:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:00.559 ************************************ 00:19:00.559 START TEST nvmf_fio_target 00:19:00.559 ************************************ 00:19:00.559 22:48:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:00.818 * Looking for test storage... 00:19:00.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.818 22:48:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.717 22:48:54 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:02.717 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:02.717 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.717 22:48:54 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:02.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:02.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.717 22:48:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.717 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.717 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.717 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:02.717 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up
00:19:02.717 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:02.717 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:02.717 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:02.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:02.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms
00:19:02.717
00:19:02.717 --- 10.0.0.2 ping statistics ---
00:19:02.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:02.717 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:19:02.717 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:02.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:02.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms
00:19:02.717
00:19:02.718 --- 10.0.0.1 ping statistics ---
00:19:02.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:02.718 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3543469
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3543469
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3543469 ']'
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:02.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
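At this point nvmftestinit has finished building the test network: one port of the detected E810 pair (cvl_0_0) has been moved into the cvl_0_0_ns_spdk namespace as the target-side interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and both directions have been verified with ping. A minimal standalone sketch of the equivalent commands, for reference only; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are the values detected above, so substitute your own NIC ports when reproducing (root privileges assumed):

    # build the namespaced point-to-point test network used by nvmftestinit
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator sanity check

The provisioning that follows in the trace is driven entirely over /var/tmp/spdk.sock via rpc.py. A condensed sketch of that sequence, using the same sizes, bdev names, and NQNs that appear below (the seven bdev_malloc_create calls and four nvmf_subsystem_add_ns calls are abbreviated to one of each):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512                  # run once per bdev: Malloc0..Malloc6 in the trace
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Once the connect succeeds, the subsystem's four namespaces surface as /dev/nvme0n1..n4 on the initiator side, which is exactly what the fio job files below target.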
00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:02.718 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.718 [2024-07-26 22:48:55.160243] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:19:02.718 [2024-07-26 22:48:55.160329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.718 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.976 [2024-07-26 22:48:55.226511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:02.976 [2024-07-26 22:48:55.318276] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.976 [2024-07-26 22:48:55.318334] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.976 [2024-07-26 22:48:55.318363] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.976 [2024-07-26 22:48:55.318374] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.976 [2024-07-26 22:48:55.318383] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.976 [2024-07-26 22:48:55.318478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.976 [2024-07-26 22:48:55.318533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.976 [2024-07-26 22:48:55.318657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.976 [2024-07-26 22:48:55.318660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.976 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:02.976 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:19:02.976 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:02.976 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.976 22:48:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.976 22:48:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.976 22:48:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:03.234 [2024-07-26 22:48:55.703616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.234 22:48:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:03.799 22:48:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:03.799 22:48:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:03.799 22:48:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:03.799 22:48:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:04.056 22:48:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:04.056 22:48:56 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:04.313 22:48:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:04.313 22:48:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:04.570 22:48:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:04.828 22:48:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:04.828 22:48:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:05.086 22:48:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:05.086 22:48:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:05.343 22:48:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:05.344 22:48:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:05.601 22:48:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:05.858 22:48:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:05.858 22:48:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:06.115 22:48:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:06.115 22:48:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:06.372 22:48:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.629 [2024-07-26 22:48:59.038202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.629 22:48:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:06.886 22:48:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:07.143 22:48:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:07.707 22:49:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:07.707 22:49:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:07.707 22:49:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # 
local nvme_device_counter=1 nvme_devices=0 00:19:07.707 22:49:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:07.707 22:49:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:07.707 22:49:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:10.233 22:49:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:10.233 22:49:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:10.233 22:49:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:10.233 22:49:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:10.233 22:49:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:10.233 22:49:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:10.233 22:49:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:10.233 [global] 00:19:10.233 thread=1 00:19:10.233 invalidate=1 00:19:10.233 rw=write 00:19:10.233 time_based=1 00:19:10.233 runtime=1 00:19:10.233 ioengine=libaio 00:19:10.233 direct=1 00:19:10.233 bs=4096 00:19:10.233 iodepth=1 00:19:10.233 norandommap=0 00:19:10.233 numjobs=1 00:19:10.233 00:19:10.233 verify_dump=1 00:19:10.233 verify_backlog=512 00:19:10.233 verify_state_save=0 00:19:10.233 do_verify=1 00:19:10.233 verify=crc32c-intel 00:19:10.233 [job0] 00:19:10.233 filename=/dev/nvme0n1 00:19:10.233 [job1] 00:19:10.233 filename=/dev/nvme0n2 00:19:10.233 [job2] 00:19:10.233 filename=/dev/nvme0n3 00:19:10.233 [job3] 00:19:10.233 filename=/dev/nvme0n4 00:19:10.233 Could not set queue depth (nvme0n1) 00:19:10.233 Could not set queue depth (nvme0n2) 00:19:10.233 Could not set queue depth (nvme0n3) 00:19:10.233 Could not set queue depth (nvme0n4) 00:19:10.233 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.233 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.233 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.233 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.233 fio-3.35 00:19:10.233 Starting 4 threads 00:19:11.165 00:19:11.165 job0: (groupid=0, jobs=1): err= 0: pid=3544414: Fri Jul 26 22:49:03 2024 00:19:11.166 read: IOPS=1014, BW=4059KiB/s (4157kB/s)(4108KiB/1012msec) 00:19:11.166 slat (nsec): min=5230, max=75223, avg=23291.36, stdev=11926.66 00:19:11.166 clat (usec): min=296, max=41413, avg=534.45, stdev=2201.18 00:19:11.166 lat (usec): min=302, max=41421, avg=557.75, stdev=2200.80 00:19:11.166 clat percentiles (usec): 00:19:11.166 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 338], 00:19:11.166 | 30.00th=[ 371], 40.00th=[ 392], 50.00th=[ 412], 60.00th=[ 437], 00:19:11.166 | 70.00th=[ 457], 80.00th=[ 486], 90.00th=[ 519], 95.00th=[ 537], 00:19:11.166 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[41157], 99.95th=[41157], 00:19:11.166 | 99.99th=[41157] 00:19:11.166 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:19:11.166 slat (nsec): min=5973, max=73718, avg=15859.49, stdev=10832.51 00:19:11.166 clat (usec): min=181, max=920, avg=260.37, 
stdev=65.19 00:19:11.166 lat (usec): min=189, max=929, avg=276.23, stdev=70.38 00:19:11.166 clat percentiles (usec): 00:19:11.166 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:19:11.166 | 30.00th=[ 217], 40.00th=[ 233], 50.00th=[ 247], 60.00th=[ 255], 00:19:11.166 | 70.00th=[ 277], 80.00th=[ 314], 90.00th=[ 351], 95.00th=[ 388], 00:19:11.166 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 734], 99.95th=[ 922], 00:19:11.166 | 99.99th=[ 922] 00:19:11.166 bw ( KiB/s): min= 4096, max= 8192, per=33.87%, avg=6144.00, stdev=2896.31, samples=2 00:19:11.166 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:11.166 lat (usec) : 250=33.71%, 500=59.46%, 750=6.67%, 1000=0.04% 00:19:11.166 lat (msec) : 50=0.12% 00:19:11.166 cpu : usr=2.87%, sys=4.65%, ctx=2567, majf=0, minf=1 00:19:11.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.166 issued rwts: total=1027,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.166 job1: (groupid=0, jobs=1): err= 0: pid=3544415: Fri Jul 26 22:49:03 2024 00:19:11.166 read: IOPS=1055, BW=4224KiB/s (4325kB/s)(4228KiB/1001msec) 00:19:11.166 slat (nsec): min=5815, max=61882, avg=20663.45, stdev=10744.48 00:19:11.166 clat (usec): min=348, max=644, avg=467.99, stdev=47.55 00:19:11.166 lat (usec): min=355, max=667, avg=488.65, stdev=52.94 00:19:11.166 clat percentiles (usec): 00:19:11.166 | 1.00th=[ 359], 5.00th=[ 388], 10.00th=[ 400], 20.00th=[ 429], 00:19:11.166 | 30.00th=[ 445], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 482], 00:19:11.166 | 70.00th=[ 494], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 545], 00:19:11.166 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[ 635], 99.95th=[ 644], 00:19:11.166 | 99.99th=[ 644] 00:19:11.166 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:11.166 slat (nsec): min=6872, max=78408, avg=16914.49, stdev=11219.72 00:19:11.166 clat (usec): min=175, max=496, avg=289.29, stdev=85.14 00:19:11.166 lat (usec): min=183, max=560, avg=306.21, stdev=92.23 00:19:11.166 clat percentiles (usec): 00:19:11.166 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:19:11.166 | 30.00th=[ 198], 40.00th=[ 273], 50.00th=[ 310], 60.00th=[ 322], 00:19:11.166 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 433], 00:19:11.166 | 99.00th=[ 465], 99.50th=[ 478], 99.90th=[ 490], 99.95th=[ 498], 00:19:11.166 | 99.99th=[ 498] 00:19:11.166 bw ( KiB/s): min= 5192, max= 5192, per=28.62%, avg=5192.00, stdev= 0.00, samples=1 00:19:11.166 iops : min= 1298, max= 1298, avg=1298.00, stdev= 0.00, samples=1 00:19:11.166 lat (usec) : 250=22.83%, 500=66.80%, 750=10.37% 00:19:11.166 cpu : usr=3.30%, sys=5.50%, ctx=2595, majf=0, minf=2 00:19:11.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.166 issued rwts: total=1057,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.166 job2: (groupid=0, jobs=1): err= 0: pid=3544418: Fri Jul 26 22:49:03 2024 00:19:11.166 read: IOPS=24, BW=98.4KiB/s (101kB/s)(100KiB/1016msec) 00:19:11.166 slat (nsec): min=6523, 
max=43077, avg=18471.36, stdev=8058.75 00:19:11.166 clat (usec): min=442, max=41099, avg=34479.00, stdev=15154.30 00:19:11.166 lat (usec): min=456, max=41105, avg=34497.47, stdev=15155.81 00:19:11.166 clat percentiles (usec): 00:19:11.166 | 1.00th=[ 441], 5.00th=[ 445], 10.00th=[ 461], 20.00th=[40633], 00:19:11.166 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:11.166 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:11.166 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:11.166 | 99.99th=[41157] 00:19:11.166 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:19:11.166 slat (nsec): min=7208, max=74031, avg=14474.93, stdev=7660.37 00:19:11.166 clat (usec): min=220, max=488, avg=282.32, stdev=39.95 00:19:11.166 lat (usec): min=233, max=496, avg=296.79, stdev=41.06 00:19:11.166 clat percentiles (usec): 00:19:11.166 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:19:11.166 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:19:11.166 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 363], 00:19:11.166 | 99.00th=[ 400], 99.50th=[ 424], 99.90th=[ 490], 99.95th=[ 490], 00:19:11.166 | 99.99th=[ 490] 00:19:11.166 bw ( KiB/s): min= 4096, max= 4096, per=22.58%, avg=4096.00, stdev= 0.00, samples=1 00:19:11.166 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:11.166 lat (usec) : 250=17.32%, 500=78.77% 00:19:11.166 lat (msec) : 50=3.91% 00:19:11.166 cpu : usr=0.49%, sys=0.59%, ctx=538, majf=0, minf=1 00:19:11.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.166 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.166 job3: (groupid=0, jobs=1): err= 0: pid=3544419: Fri Jul 26 22:49:03 2024 00:19:11.166 read: IOPS=933, BW=3733KiB/s (3822kB/s)(3744KiB/1003msec) 00:19:11.166 slat (nsec): min=5490, max=69862, avg=26739.50, stdev=11480.53 00:19:11.166 clat (usec): min=323, max=41131, avg=717.41, stdev=2958.24 00:19:11.166 lat (usec): min=329, max=41146, avg=744.15, stdev=2957.44 00:19:11.166 clat percentiles (usec): 00:19:11.166 | 1.00th=[ 343], 5.00th=[ 392], 10.00th=[ 416], 20.00th=[ 441], 00:19:11.166 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 478], 60.00th=[ 490], 00:19:11.166 | 70.00th=[ 502], 80.00th=[ 519], 90.00th=[ 545], 95.00th=[ 586], 00:19:11.166 | 99.00th=[ 1221], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:19:11.166 | 99.99th=[41157] 00:19:11.166 write: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec); 0 zone resets 00:19:11.166 slat (nsec): min=6568, max=68071, avg=15828.91, stdev=7936.73 00:19:11.166 clat (usec): min=204, max=477, avg=271.58, stdev=53.47 00:19:11.166 lat (usec): min=214, max=487, avg=287.41, stdev=54.45 00:19:11.166 clat percentiles (usec): 00:19:11.166 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 229], 00:19:11.166 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 265], 00:19:11.166 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 359], 95.00th=[ 383], 00:19:11.166 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 469], 99.95th=[ 478], 00:19:11.166 | 99.99th=[ 478] 00:19:11.166 bw ( KiB/s): min= 3416, max= 4776, per=22.58%, avg=4096.00, stdev=961.67, samples=2 00:19:11.166 iops : min= 
854, max= 1194, avg=1024.00, stdev=240.42, samples=2 00:19:11.166 lat (usec) : 250=25.00%, 500=59.74%, 750=14.64%, 1000=0.10% 00:19:11.166 lat (msec) : 2=0.10%, 4=0.10%, 20=0.05%, 50=0.26% 00:19:11.166 cpu : usr=2.20%, sys=4.39%, ctx=1960, majf=0, minf=1 00:19:11.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.166 issued rwts: total=936,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.166 00:19:11.166 Run status group 0 (all jobs): 00:19:11.166 READ: bw=11.7MiB/s (12.3MB/s), 98.4KiB/s-4224KiB/s (101kB/s-4325kB/s), io=11.9MiB (12.5MB), run=1001-1016msec 00:19:11.166 WRITE: bw=17.7MiB/s (18.6MB/s), 2016KiB/s-6138KiB/s (2064kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1016msec 00:19:11.166 00:19:11.166 Disk stats (read/write): 00:19:11.166 nvme0n1: ios=1050/1344, merge=0/0, ticks=1368/352, in_queue=1720, util=97.60% 00:19:11.166 nvme0n2: ios=981/1024, merge=0/0, ticks=1028/321, in_queue=1349, util=97.86% 00:19:11.166 nvme0n3: ios=62/512, merge=0/0, ticks=1587/142, in_queue=1729, util=98.43% 00:19:11.166 nvme0n4: ios=931/1024, merge=0/0, ticks=490/275, in_queue=765, util=89.55% 00:19:11.166 22:49:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:11.166 [global] 00:19:11.166 thread=1 00:19:11.166 invalidate=1 00:19:11.166 rw=randwrite 00:19:11.166 time_based=1 00:19:11.166 runtime=1 00:19:11.166 ioengine=libaio 00:19:11.166 direct=1 00:19:11.166 bs=4096 00:19:11.166 iodepth=1 00:19:11.166 norandommap=0 00:19:11.166 numjobs=1 00:19:11.166 00:19:11.166 verify_dump=1 00:19:11.166 verify_backlog=512 00:19:11.166 verify_state_save=0 00:19:11.166 do_verify=1 00:19:11.166 verify=crc32c-intel 00:19:11.166 [job0] 00:19:11.166 filename=/dev/nvme0n1 00:19:11.166 [job1] 00:19:11.166 filename=/dev/nvme0n2 00:19:11.166 [job2] 00:19:11.166 filename=/dev/nvme0n3 00:19:11.166 [job3] 00:19:11.166 filename=/dev/nvme0n4 00:19:11.166 Could not set queue depth (nvme0n1) 00:19:11.166 Could not set queue depth (nvme0n2) 00:19:11.166 Could not set queue depth (nvme0n3) 00:19:11.166 Could not set queue depth (nvme0n4) 00:19:11.423 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.423 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.423 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.423 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.423 fio-3.35 00:19:11.423 Starting 4 threads 00:19:12.795 00:19:12.795 job0: (groupid=0, jobs=1): err= 0: pid=3544760: Fri Jul 26 22:49:05 2024 00:19:12.795 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:19:12.795 slat (nsec): min=8028, max=43951, avg=23246.59, stdev=10902.65 00:19:12.795 clat (usec): min=10765, max=41157, avg=39601.81, stdev=6440.77 00:19:12.795 lat (usec): min=10783, max=41165, avg=39625.06, stdev=6441.97 00:19:12.795 clat percentiles (usec): 00:19:12.795 | 1.00th=[10814], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:12.795 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:19:12.795 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:12.795 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:12.795 | 99.99th=[41157] 00:19:12.795 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:19:12.795 slat (nsec): min=6621, max=46153, avg=11798.13, stdev=6194.81 00:19:12.795 clat (usec): min=194, max=431, avg=236.92, stdev=30.23 00:19:12.795 lat (usec): min=203, max=451, avg=248.71, stdev=31.72 00:19:12.795 clat percentiles (usec): 00:19:12.795 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:19:12.795 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:19:12.795 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 297], 00:19:12.795 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 433], 99.95th=[ 433], 00:19:12.795 | 99.99th=[ 433] 00:19:12.795 bw ( KiB/s): min= 4096, max= 4096, per=33.87%, avg=4096.00, stdev= 0.00, samples=1 00:19:12.795 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:12.795 lat (usec) : 250=75.09%, 500=20.79% 00:19:12.795 lat (msec) : 20=0.19%, 50=3.93% 00:19:12.795 cpu : usr=0.10%, sys=0.90%, ctx=538, majf=0, minf=2 00:19:12.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.795 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:12.795 job1: (groupid=0, jobs=1): err= 0: pid=3544761: Fri Jul 26 22:49:05 2024 00:19:12.795 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1016msec) 00:19:12.795 slat (nsec): min=7985, max=35934, avg=23536.60, stdev=10471.40 00:19:12.795 clat (usec): min=40912, max=41141, avg=40982.07, stdev=56.33 00:19:12.795 lat (usec): min=40948, max=41149, avg=41005.61, stdev=50.97 00:19:12.795 clat percentiles (usec): 00:19:12.795 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:12.795 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:12.795 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:12.795 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:12.795 | 99.99th=[41157] 00:19:12.795 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:19:12.795 slat (nsec): min=7918, max=52483, avg=16787.56, stdev=9607.38 00:19:12.795 clat (usec): min=194, max=761, avg=354.05, stdev=161.37 00:19:12.795 lat (usec): min=203, max=801, avg=370.84, stdev=165.41 00:19:12.795 clat percentiles (usec): 00:19:12.795 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 227], 00:19:12.795 | 30.00th=[ 239], 40.00th=[ 251], 50.00th=[ 269], 60.00th=[ 289], 00:19:12.795 | 70.00th=[ 453], 80.00th=[ 553], 90.00th=[ 619], 95.00th=[ 652], 00:19:12.795 | 99.00th=[ 717], 99.50th=[ 750], 99.90th=[ 758], 99.95th=[ 758], 00:19:12.795 | 99.99th=[ 758] 00:19:12.795 bw ( KiB/s): min= 4096, max= 4096, per=33.87%, avg=4096.00, stdev= 0.00, samples=1 00:19:12.795 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:12.795 lat (usec) : 250=38.35%, 500=33.46%, 750=24.06%, 1000=0.38% 00:19:12.795 lat (msec) : 50=3.76% 00:19:12.795 cpu : usr=0.69%, sys=0.99%, ctx=533, majf=0, minf=1 00:19:12.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:12.795 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.795 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:12.795 job2: (groupid=0, jobs=1): err= 0: pid=3544762: Fri Jul 26 22:49:05 2024 00:19:12.795 read: IOPS=18, BW=75.2KiB/s (77.1kB/s)(76.0KiB/1010msec) 00:19:12.795 slat (nsec): min=12329, max=37296, avg=26774.89, stdev=9000.94 00:19:12.795 clat (usec): min=40868, max=41267, avg=40982.29, stdev=92.90 00:19:12.795 lat (usec): min=40905, max=41290, avg=41009.06, stdev=88.59 00:19:12.795 clat percentiles (usec): 00:19:12.795 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:12.795 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:12.795 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:12.795 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:12.795 | 99.99th=[41157] 00:19:12.795 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:19:12.795 slat (nsec): min=7908, max=77759, avg=21921.14, stdev=13420.59 00:19:12.795 clat (usec): min=316, max=606, avg=421.98, stdev=52.87 00:19:12.795 lat (usec): min=330, max=645, avg=443.90, stdev=56.13 00:19:12.795 clat percentiles (usec): 00:19:12.795 | 1.00th=[ 326], 5.00th=[ 343], 10.00th=[ 359], 20.00th=[ 375], 00:19:12.795 | 30.00th=[ 392], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 429], 00:19:12.795 | 70.00th=[ 445], 80.00th=[ 465], 90.00th=[ 494], 95.00th=[ 515], 00:19:12.795 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 603], 99.95th=[ 603], 00:19:12.795 | 99.99th=[ 603] 00:19:12.795 bw ( KiB/s): min= 4096, max= 4096, per=33.87%, avg=4096.00, stdev= 0.00, samples=1 00:19:12.795 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:12.795 lat (usec) : 500=88.32%, 750=8.10% 00:19:12.795 lat (msec) : 50=3.58% 00:19:12.795 cpu : usr=0.99%, sys=1.19%, ctx=531, majf=0, minf=1 00:19:12.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.795 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:12.795 job3: (groupid=0, jobs=1): err= 0: pid=3544763: Fri Jul 26 22:49:05 2024 00:19:12.795 read: IOPS=1265, BW=5063KiB/s (5184kB/s)(5068KiB/1001msec) 00:19:12.795 slat (nsec): min=5743, max=56757, avg=15725.49, stdev=5670.95 00:19:12.795 clat (usec): min=310, max=1000, avg=362.04, stdev=52.11 00:19:12.795 lat (usec): min=319, max=1013, avg=377.76, stdev=54.01 00:19:12.795 clat percentiles (usec): 00:19:12.795 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:19:12.795 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 363], 00:19:12.795 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 379], 95.00th=[ 383], 00:19:12.795 | 99.00th=[ 644], 99.50th=[ 693], 99.90th=[ 963], 99.95th=[ 1004], 00:19:12.795 | 99.99th=[ 1004] 00:19:12.795 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:12.795 slat (nsec): min=8533, max=82128, avg=19667.35, stdev=9397.57 00:19:12.795 clat (usec): min=196, max=676, avg=310.34, stdev=99.40 00:19:12.795 lat (usec): min=207, max=686, avg=330.00, stdev=100.86 00:19:12.796 clat percentiles 
(usec): 00:19:12.796 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 229], 00:19:12.796 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 273], 60.00th=[ 293], 00:19:12.796 | 70.00th=[ 351], 80.00th=[ 404], 90.00th=[ 461], 95.00th=[ 502], 00:19:12.796 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 660], 99.95th=[ 676], 00:19:12.796 | 99.99th=[ 676] 00:19:12.796 bw ( KiB/s): min= 6160, max= 6160, per=50.93%, avg=6160.00, stdev= 0.00, samples=1 00:19:12.796 iops : min= 1540, max= 1540, avg=1540.00, stdev= 0.00, samples=1 00:19:12.796 lat (usec) : 250=21.16%, 500=75.10%, 750=3.53%, 1000=0.18% 00:19:12.796 lat (msec) : 2=0.04% 00:19:12.796 cpu : usr=3.40%, sys=7.20%, ctx=2803, majf=0, minf=1 00:19:12.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:12.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.796 issued rwts: total=1267,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:12.796 00:19:12.796 Run status group 0 (all jobs): 00:19:12.796 READ: bw=5228KiB/s (5354kB/s), 75.2KiB/s-5063KiB/s (77.1kB/s-5184kB/s), io=5312KiB (5439kB), run=1001-1016msec 00:19:12.796 WRITE: bw=11.8MiB/s (12.4MB/s), 2016KiB/s-6138KiB/s (2064kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1016msec 00:19:12.796 00:19:12.796 Disk stats (read/write): 00:19:12.796 nvme0n1: ios=69/512, merge=0/0, ticks=1302/114, in_queue=1416, util=97.60% 00:19:12.796 nvme0n2: ios=39/512, merge=0/0, ticks=1599/183, in_queue=1782, util=96.54% 00:19:12.796 nvme0n3: ios=15/512, merge=0/0, ticks=616/191, in_queue=807, util=89.02% 00:19:12.796 nvme0n4: ios=1024/1270, merge=0/0, ticks=363/398, in_queue=761, util=89.67% 00:19:12.796 22:49:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:12.796 [global] 00:19:12.796 thread=1 00:19:12.796 invalidate=1 00:19:12.796 rw=write 00:19:12.796 time_based=1 00:19:12.796 runtime=1 00:19:12.796 ioengine=libaio 00:19:12.796 direct=1 00:19:12.796 bs=4096 00:19:12.796 iodepth=128 00:19:12.796 norandommap=0 00:19:12.796 numjobs=1 00:19:12.796 00:19:12.796 verify_dump=1 00:19:12.796 verify_backlog=512 00:19:12.796 verify_state_save=0 00:19:12.796 do_verify=1 00:19:12.796 verify=crc32c-intel 00:19:12.796 [job0] 00:19:12.796 filename=/dev/nvme0n1 00:19:12.796 [job1] 00:19:12.796 filename=/dev/nvme0n2 00:19:12.796 [job2] 00:19:12.796 filename=/dev/nvme0n3 00:19:12.796 [job3] 00:19:12.796 filename=/dev/nvme0n4 00:19:12.796 Could not set queue depth (nvme0n1) 00:19:12.796 Could not set queue depth (nvme0n2) 00:19:12.796 Could not set queue depth (nvme0n3) 00:19:12.796 Could not set queue depth (nvme0n4) 00:19:12.796 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.796 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.796 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.796 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.796 fio-3.35 00:19:12.796 Starting 4 threads 00:19:14.182 00:19:14.182 job0: (groupid=0, jobs=1): err= 0: pid=3544995: Fri Jul 26 22:49:06 2024 00:19:14.182 read: IOPS=3035, BW=11.9MiB/s 
(12.4MB/s)(12.4MiB/1044msec) 00:19:14.182 slat (usec): min=3, max=7296, avg=121.39, stdev=628.03 00:19:14.182 clat (usec): min=9272, max=68490, avg=17042.59, stdev=8542.06 00:19:14.182 lat (usec): min=9286, max=68498, avg=17163.98, stdev=8579.31 00:19:14.182 clat percentiles (usec): 00:19:14.182 | 1.00th=[ 9765], 5.00th=[11207], 10.00th=[11731], 20.00th=[12911], 00:19:14.182 | 30.00th=[13960], 40.00th=[14484], 50.00th=[14877], 60.00th=[15664], 00:19:14.182 | 70.00th=[17171], 80.00th=[19006], 90.00th=[20317], 95.00th=[25822], 00:19:14.182 | 99.00th=[64226], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:19:14.182 | 99.99th=[68682] 00:19:14.182 write: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1044msec); 0 zone resets 00:19:14.182 slat (usec): min=4, max=29094, avg=161.39, stdev=1011.73 00:19:14.182 clat (usec): min=8868, max=74339, avg=21717.01, stdev=10388.94 00:19:14.182 lat (usec): min=8882, max=74365, avg=21878.41, stdev=10465.86 00:19:14.182 clat percentiles (usec): 00:19:14.182 | 1.00th=[ 9896], 5.00th=[10945], 10.00th=[11207], 20.00th=[15270], 00:19:14.182 | 30.00th=[17433], 40.00th=[19268], 50.00th=[19792], 60.00th=[20317], 00:19:14.182 | 70.00th=[21365], 80.00th=[23987], 90.00th=[36439], 95.00th=[44827], 00:19:14.182 | 99.00th=[57934], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:19:14.182 | 99.99th=[73925] 00:19:14.182 bw ( KiB/s): min=13024, max=15400, per=23.37%, avg=14212.00, stdev=1680.09, samples=2 00:19:14.182 iops : min= 3256, max= 3850, avg=3553.00, stdev=420.02, samples=2 00:19:14.182 lat (msec) : 10=1.29%, 20=68.37%, 50=27.34%, 100=3.01% 00:19:14.182 cpu : usr=5.36%, sys=7.18%, ctx=357, majf=0, minf=1 00:19:14.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:14.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.182 issued rwts: total=3169,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.182 job1: (groupid=0, jobs=1): err= 0: pid=3544996: Fri Jul 26 22:49:06 2024 00:19:14.182 read: IOPS=3943, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1004msec) 00:19:14.182 slat (usec): min=2, max=37106, avg=138.93, stdev=1222.43 00:19:14.182 clat (usec): min=1035, max=108333, avg=17433.07, stdev=14857.36 00:19:14.182 lat (msec): min=4, max=108, avg=17.57, stdev=14.96 00:19:14.182 clat percentiles (msec): 00:19:14.182 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:19:14.182 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:19:14.182 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 32], 95.00th=[ 45], 00:19:14.182 | 99.00th=[ 99], 99.50th=[ 99], 99.90th=[ 99], 99.95th=[ 99], 00:19:14.182 | 99.99th=[ 109] 00:19:14.182 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:14.182 slat (usec): min=3, max=8406, avg=100.71, stdev=557.93 00:19:14.182 clat (usec): min=1019, max=72405, avg=14161.45, stdev=10819.13 00:19:14.182 lat (usec): min=1030, max=72419, avg=14262.16, stdev=10868.91 00:19:14.182 clat percentiles (usec): 00:19:14.182 | 1.00th=[ 4752], 5.00th=[ 7308], 10.00th=[ 8225], 20.00th=[ 9765], 00:19:14.182 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11469], 60.00th=[12387], 00:19:14.182 | 70.00th=[12911], 80.00th=[13960], 90.00th=[18482], 95.00th=[42730], 00:19:14.182 | 99.00th=[69731], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:19:14.182 | 99.99th=[72877] 00:19:14.182 bw ( KiB/s): min=16384, 
max=16384, per=26.94%, avg=16384.00, stdev= 0.00, samples=2 00:19:14.182 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:14.182 lat (msec) : 2=0.15%, 10=19.71%, 20=66.73%, 50=10.34%, 100=3.05% 00:19:14.182 lat (msec) : 250=0.01% 00:19:14.182 cpu : usr=5.58%, sys=6.38%, ctx=373, majf=0, minf=1 00:19:14.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:14.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.182 issued rwts: total=3959,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.182 job2: (groupid=0, jobs=1): err= 0: pid=3544997: Fri Jul 26 22:49:06 2024 00:19:14.182 read: IOPS=4206, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1003msec) 00:19:14.182 slat (usec): min=2, max=54307, avg=102.07, stdev=1093.28 00:19:14.182 clat (usec): min=666, max=64633, avg=14678.26, stdev=6647.95 00:19:14.182 lat (usec): min=688, max=64639, avg=14780.33, stdev=6710.05 00:19:14.182 clat percentiles (usec): 00:19:14.182 | 1.00th=[ 1516], 5.00th=[ 6325], 10.00th=[ 9241], 20.00th=[11338], 00:19:14.182 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13829], 60.00th=[15008], 00:19:14.182 | 70.00th=[15795], 80.00th=[17171], 90.00th=[20841], 95.00th=[23462], 00:19:14.182 | 99.00th=[30802], 99.50th=[64226], 99.90th=[64750], 99.95th=[64750], 00:19:14.182 | 99.99th=[64750] 00:19:14.182 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:19:14.182 slat (usec): min=3, max=14884, avg=89.20, stdev=619.86 00:19:14.182 clat (usec): min=495, max=74549, avg=14203.97, stdev=9002.88 00:19:14.182 lat (usec): min=913, max=74557, avg=14293.18, stdev=9029.57 00:19:14.182 clat percentiles (usec): 00:19:14.182 | 1.00th=[ 2212], 5.00th=[ 4948], 10.00th=[ 7046], 20.00th=[ 9241], 00:19:14.182 | 30.00th=[10683], 40.00th=[12256], 50.00th=[13304], 60.00th=[14091], 00:19:14.182 | 70.00th=[15401], 80.00th=[17171], 90.00th=[19268], 95.00th=[22938], 00:19:14.182 | 99.00th=[65799], 99.50th=[68682], 99.90th=[74974], 99.95th=[74974], 00:19:14.182 | 99.99th=[74974] 00:19:14.182 bw ( KiB/s): min=16344, max=20480, per=30.28%, avg=18412.00, stdev=2924.59, samples=2 00:19:14.182 iops : min= 4086, max= 5120, avg=4603.00, stdev=731.15, samples=2 00:19:14.182 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.07% 00:19:14.182 lat (msec) : 2=1.01%, 4=2.10%, 10=15.25%, 20=71.34%, 50=8.77% 00:19:14.182 lat (msec) : 100=1.44% 00:19:14.182 cpu : usr=3.99%, sys=6.29%, ctx=374, majf=0, minf=1 00:19:14.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:14.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.182 issued rwts: total=4219,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.182 job3: (groupid=0, jobs=1): err= 0: pid=3544999: Fri Jul 26 22:49:06 2024 00:19:14.182 read: IOPS=3540, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1002msec) 00:19:14.182 slat (usec): min=2, max=17110, avg=125.07, stdev=848.11 00:19:14.182 clat (usec): min=1227, max=41850, avg=16753.06, stdev=6581.15 00:19:14.182 lat (usec): min=4256, max=44602, avg=16878.13, stdev=6642.85 00:19:14.182 clat percentiles (usec): 00:19:14.182 | 1.00th=[ 4752], 5.00th=[10159], 10.00th=[11338], 20.00th=[12387], 00:19:14.183 | 30.00th=[12649], 
40.00th=[13173], 50.00th=[13960], 60.00th=[14746], 00:19:14.183 | 70.00th=[16909], 80.00th=[24249], 90.00th=[28181], 95.00th=[30016], 00:19:14.183 | 99.00th=[32637], 99.50th=[33424], 99.90th=[40633], 99.95th=[41681], 00:19:14.183 | 99.99th=[41681] 00:19:14.183 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:19:14.183 slat (usec): min=3, max=13101, avg=137.73, stdev=733.34 00:19:14.183 clat (usec): min=1394, max=57504, avg=18060.00, stdev=11857.19 00:19:14.183 lat (usec): min=1405, max=57516, avg=18197.74, stdev=11929.06 00:19:14.183 clat percentiles (usec): 00:19:14.183 | 1.00th=[ 4621], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[12256], 00:19:14.183 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:19:14.183 | 70.00th=[15533], 80.00th=[18482], 90.00th=[42206], 95.00th=[49546], 00:19:14.183 | 99.00th=[52691], 99.50th=[52691], 99.90th=[57410], 99.95th=[57410], 00:19:14.183 | 99.99th=[57410] 00:19:14.183 bw ( KiB/s): min=12288, max=16384, per=23.57%, avg=14336.00, stdev=2896.31, samples=2 00:19:14.183 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:14.183 lat (msec) : 2=0.11%, 4=0.35%, 10=5.82%, 20=72.95%, 50=18.40% 00:19:14.183 lat (msec) : 100=2.37% 00:19:14.183 cpu : usr=4.00%, sys=8.69%, ctx=386, majf=0, minf=1 00:19:14.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:14.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.183 issued rwts: total=3548,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.183 00:19:14.183 Run status group 0 (all jobs): 00:19:14.183 READ: bw=55.7MiB/s (58.4MB/s), 11.9MiB/s-16.4MiB/s (12.4MB/s-17.2MB/s), io=58.2MiB (61.0MB), run=1002-1044msec 00:19:14.183 WRITE: bw=59.4MiB/s (62.3MB/s), 13.4MiB/s-17.9MiB/s (14.1MB/s-18.8MB/s), io=62.0MiB (65.0MB), run=1002-1044msec 00:19:14.183 00:19:14.183 Disk stats (read/write): 00:19:14.183 nvme0n1: ios=2601/3072, merge=0/0, ticks=12678/21895, in_queue=34573, util=97.80% 00:19:14.183 nvme0n2: ios=3299/3584, merge=0/0, ticks=21577/17774, in_queue=39351, util=98.88% 00:19:14.183 nvme0n3: ios=4009/4096, merge=0/0, ticks=46248/39279, in_queue=85527, util=97.49% 00:19:14.183 nvme0n4: ios=2644/3072, merge=0/0, ticks=26713/35990, in_queue=62703, util=96.95% 00:19:14.183 22:49:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:14.183 [global] 00:19:14.183 thread=1 00:19:14.183 invalidate=1 00:19:14.183 rw=randwrite 00:19:14.183 time_based=1 00:19:14.183 runtime=1 00:19:14.183 ioengine=libaio 00:19:14.183 direct=1 00:19:14.183 bs=4096 00:19:14.183 iodepth=128 00:19:14.183 norandommap=0 00:19:14.183 numjobs=1 00:19:14.183 00:19:14.183 verify_dump=1 00:19:14.183 verify_backlog=512 00:19:14.183 verify_state_save=0 00:19:14.183 do_verify=1 00:19:14.183 verify=crc32c-intel 00:19:14.183 [job0] 00:19:14.183 filename=/dev/nvme0n1 00:19:14.183 [job1] 00:19:14.183 filename=/dev/nvme0n2 00:19:14.183 [job2] 00:19:14.183 filename=/dev/nvme0n3 00:19:14.183 [job3] 00:19:14.183 filename=/dev/nvme0n4 00:19:14.183 Could not set queue depth (nvme0n1) 00:19:14.183 Could not set queue depth (nvme0n2) 00:19:14.183 Could not set queue depth (nvme0n3) 00:19:14.183 Could not set queue depth (nvme0n4) 00:19:14.480 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:14.480 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:14.480 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:14.480 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:14.480 fio-3.35 00:19:14.480 Starting 4 threads 00:19:15.855 00:19:15.855 job0: (groupid=0, jobs=1): err= 0: pid=3545227: Fri Jul 26 22:49:07 2024 00:19:15.855 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:19:15.855 slat (usec): min=3, max=15207, avg=107.99, stdev=586.09 00:19:15.855 clat (usec): min=8564, max=69537, avg=14475.80, stdev=8745.52 00:19:15.855 lat (usec): min=8805, max=69542, avg=14583.79, stdev=8779.61 00:19:15.855 clat percentiles (usec): 00:19:15.855 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10945], 20.00th=[11338], 00:19:15.855 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:19:15.855 | 70.00th=[13566], 80.00th=[14222], 90.00th=[16319], 95.00th=[21890], 00:19:15.855 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:19:15.855 | 99.99th=[69731] 00:19:15.855 write: IOPS=5008, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1002msec); 0 zone resets 00:19:15.855 slat (usec): min=4, max=7118, avg=88.26, stdev=374.80 00:19:15.855 clat (usec): min=513, max=18952, avg=11864.10, stdev=2028.11 00:19:15.855 lat (usec): min=3062, max=18971, avg=11952.36, stdev=2017.56 00:19:15.855 clat percentiles (usec): 00:19:15.855 | 1.00th=[ 6718], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10159], 00:19:15.855 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:19:15.855 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13960], 95.00th=[15664], 00:19:15.855 | 99.00th=[18220], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006], 00:19:15.855 | 99.99th=[19006] 00:19:15.855 bw ( KiB/s): min=17226, max=21936, per=27.16%, avg=19581.00, stdev=3330.47, samples=2 00:19:15.855 iops : min= 4306, max= 5484, avg=4895.00, stdev=832.97, samples=2 00:19:15.855 lat (usec) : 750=0.01% 00:19:15.855 lat (msec) : 4=0.34%, 10=10.40%, 20=86.05%, 50=1.95%, 100=1.25% 00:19:15.855 cpu : usr=6.99%, sys=11.89%, ctx=545, majf=0, minf=1 00:19:15.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:15.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:15.855 issued rwts: total=4608,5019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:15.855 job1: (groupid=0, jobs=1): err= 0: pid=3545228: Fri Jul 26 22:49:07 2024 00:19:15.855 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:19:15.855 slat (usec): min=2, max=10548, avg=97.83, stdev=639.97 00:19:15.855 clat (usec): min=5958, max=56134, avg=12763.06, stdev=3232.14 00:19:15.855 lat (usec): min=5966, max=56140, avg=12860.90, stdev=3264.09 00:19:15.855 clat percentiles (usec): 00:19:15.855 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11207], 00:19:15.855 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:19:15.855 | 70.00th=[13173], 80.00th=[14353], 90.00th=[16188], 95.00th=[17695], 00:19:15.855 | 99.00th=[21103], 99.50th=[24249], 99.90th=[50070], 99.95th=[50070], 00:19:15.855 | 99.99th=[56361] 00:19:15.855 write: 
IOPS=5351, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1003msec); 0 zone resets 00:19:15.855 slat (usec): min=3, max=8495, avg=82.67, stdev=485.46 00:19:15.855 clat (usec): min=358, max=22201, avg=11271.47, stdev=2414.80 00:19:15.855 lat (usec): min=2575, max=23420, avg=11354.13, stdev=2419.27 00:19:15.855 clat percentiles (usec): 00:19:15.855 | 1.00th=[ 4883], 5.00th=[ 7046], 10.00th=[ 7767], 20.00th=[ 9372], 00:19:15.855 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:19:15.855 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13566], 95.00th=[15270], 00:19:15.855 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19530], 99.95th=[20055], 00:19:15.855 | 99.99th=[22152] 00:19:15.855 bw ( KiB/s): min=20480, max=21440, per=29.08%, avg=20960.00, stdev=678.82, samples=2 00:19:15.855 iops : min= 5120, max= 5360, avg=5240.00, stdev=169.71, samples=2 00:19:15.855 lat (usec) : 500=0.01% 00:19:15.855 lat (msec) : 4=0.24%, 10=15.49%, 20=83.24%, 50=0.93%, 100=0.09% 00:19:15.855 cpu : usr=5.79%, sys=10.98%, ctx=408, majf=0, minf=1 00:19:15.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:15.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:15.855 issued rwts: total=5120,5368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:15.855 job2: (groupid=0, jobs=1): err= 0: pid=3545229: Fri Jul 26 22:49:07 2024 00:19:15.855 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:19:15.855 slat (usec): min=2, max=22119, avg=107.87, stdev=877.66 00:19:15.855 clat (usec): min=3112, max=45182, avg=15132.06, stdev=4791.11 00:19:15.855 lat (usec): min=3124, max=45188, avg=15239.94, stdev=4844.47 00:19:15.855 clat percentiles (usec): 00:19:15.855 | 1.00th=[ 4490], 5.00th=[ 9634], 10.00th=[11207], 20.00th=[12256], 00:19:15.855 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13960], 60.00th=[14353], 00:19:15.855 | 70.00th=[15795], 80.00th=[18220], 90.00th=[21627], 95.00th=[23725], 00:19:15.855 | 99.00th=[29754], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:19:15.855 | 99.99th=[45351] 00:19:15.855 write: IOPS=4616, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1004msec); 0 zone resets 00:19:15.855 slat (usec): min=3, max=10533, avg=75.27, stdev=544.88 00:19:15.855 clat (usec): min=626, max=31454, avg=12366.99, stdev=5281.67 00:19:15.855 lat (usec): min=666, max=31467, avg=12442.26, stdev=5302.79 00:19:15.855 clat percentiles (usec): 00:19:15.855 | 1.00th=[ 1876], 5.00th=[ 4555], 10.00th=[ 6390], 20.00th=[ 8160], 00:19:15.855 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[12649], 60.00th=[13304], 00:19:15.855 | 70.00th=[13960], 80.00th=[15533], 90.00th=[17171], 95.00th=[23200], 00:19:15.855 | 99.00th=[29230], 99.50th=[29754], 99.90th=[31327], 99.95th=[31327], 00:19:15.855 | 99.99th=[31327] 00:19:15.855 bw ( KiB/s): min=16536, max=20328, per=25.57%, avg=18432.00, stdev=2681.35, samples=2 00:19:15.855 iops : min= 4134, max= 5082, avg=4608.00, stdev=670.34, samples=2 00:19:15.855 lat (usec) : 750=0.01%, 1000=0.01% 00:19:15.855 lat (msec) : 2=0.54%, 4=1.17%, 10=17.87%, 20=71.09%, 50=9.30% 00:19:15.855 cpu : usr=3.89%, sys=6.38%, ctx=361, majf=0, minf=1 00:19:15.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:15.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:19:15.855 issued rwts: total=4608,4635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:15.855 job3: (groupid=0, jobs=1): err= 0: pid=3545230: Fri Jul 26 22:49:07 2024 00:19:15.855 read: IOPS=2892, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1004msec) 00:19:15.855 slat (usec): min=2, max=38566, avg=198.58, stdev=1533.68 00:19:15.855 clat (usec): min=1041, max=93547, avg=22819.26, stdev=16559.25 00:19:15.855 lat (usec): min=4711, max=93560, avg=23017.84, stdev=16693.78 00:19:15.855 clat percentiles (usec): 00:19:15.855 | 1.00th=[ 5080], 5.00th=[11469], 10.00th=[13435], 20.00th=[14484], 00:19:15.855 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[16057], 00:19:15.855 | 70.00th=[22676], 80.00th=[27657], 90.00th=[45876], 95.00th=[62653], 00:19:15.855 | 99.00th=[88605], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:19:15.855 | 99.99th=[93848] 00:19:15.855 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:19:15.855 slat (usec): min=3, max=6561, avg=130.91, stdev=515.61 00:19:15.855 clat (usec): min=9323, max=93477, avg=19509.39, stdev=11116.16 00:19:15.855 lat (usec): min=9328, max=93483, avg=19640.29, stdev=11138.19 00:19:15.855 clat percentiles (usec): 00:19:15.855 | 1.00th=[10290], 5.00th=[11994], 10.00th=[12387], 20.00th=[13042], 00:19:15.855 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14353], 60.00th=[15270], 00:19:15.855 | 70.00th=[17957], 80.00th=[27132], 90.00th=[33424], 95.00th=[42206], 00:19:15.855 | 99.00th=[67634], 99.50th=[69731], 99.90th=[70779], 99.95th=[82314], 00:19:15.855 | 99.99th=[93848] 00:19:15.855 bw ( KiB/s): min= 8880, max=15696, per=17.05%, avg=12288.00, stdev=4819.64, samples=2 00:19:15.855 iops : min= 2220, max= 3924, avg=3072.00, stdev=1204.91, samples=2 00:19:15.855 lat (msec) : 2=0.02%, 10=1.39%, 20=70.05%, 50=22.84%, 100=5.71% 00:19:15.855 cpu : usr=2.99%, sys=3.09%, ctx=436, majf=0, minf=1 00:19:15.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:15.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:15.855 issued rwts: total=2904,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:15.855 00:19:15.855 Run status group 0 (all jobs): 00:19:15.856 READ: bw=67.1MiB/s (70.3MB/s), 11.3MiB/s-19.9MiB/s (11.8MB/s-20.9MB/s), io=67.3MiB (70.6MB), run=1002-1004msec 00:19:15.856 WRITE: bw=70.4MiB/s (73.8MB/s), 12.0MiB/s-20.9MiB/s (12.5MB/s-21.9MB/s), io=70.7MiB (74.1MB), run=1002-1004msec 00:19:15.856 00:19:15.856 Disk stats (read/write): 00:19:15.856 nvme0n1: ios=3890/4096, merge=0/0, ticks=14565/11119, in_queue=25684, util=96.59% 00:19:15.856 nvme0n2: ios=4209/4608, merge=0/0, ticks=35615/32948, in_queue=68563, util=97.46% 00:19:15.856 nvme0n3: ios=3640/4089, merge=0/0, ticks=54505/49247, in_queue=103752, util=97.38% 00:19:15.856 nvme0n4: ios=2617/2681, merge=0/0, ticks=20073/15696, in_queue=35769, util=97.46% 00:19:15.856 22:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:15.856 22:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3545365 00:19:15.856 22:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:15.856 22:49:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:15.856 [global] 
00:19:15.856 thread=1 00:19:15.856 invalidate=1 00:19:15.856 rw=read 00:19:15.856 time_based=1 00:19:15.856 runtime=10 00:19:15.856 ioengine=libaio 00:19:15.856 direct=1 00:19:15.856 bs=4096 00:19:15.856 iodepth=1 00:19:15.856 norandommap=1 00:19:15.856 numjobs=1 00:19:15.856 00:19:15.856 [job0] 00:19:15.856 filename=/dev/nvme0n1 00:19:15.856 [job1] 00:19:15.856 filename=/dev/nvme0n2 00:19:15.856 [job2] 00:19:15.856 filename=/dev/nvme0n3 00:19:15.856 [job3] 00:19:15.856 filename=/dev/nvme0n4 00:19:15.856 Could not set queue depth (nvme0n1) 00:19:15.856 Could not set queue depth (nvme0n2) 00:19:15.856 Could not set queue depth (nvme0n3) 00:19:15.856 Could not set queue depth (nvme0n4) 00:19:15.856 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:15.856 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:15.856 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:15.856 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:15.856 fio-3.35 00:19:15.856 Starting 4 threads 00:19:19.136 22:49:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:19.136 22:49:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:19.136 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=811008, buflen=4096 00:19:19.136 fio: pid=3545485, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:19.136 22:49:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:19.136 22:49:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:19.136 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=11608064, buflen=4096 00:19:19.136 fio: pid=3545478, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:19.394 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=434176, buflen=4096 00:19:19.394 fio: pid=3545458, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:19.394 22:49:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:19.394 22:49:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:19.652 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=12222464, buflen=4096 00:19:19.652 fio: pid=3545461, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:19.652 22:49:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:19.652 22:49:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:19.652 00:19:19.652 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3545458: Fri Jul 26 22:49:12 2024 00:19:19.652 read: IOPS=31, BW=125KiB/s (128kB/s)(424KiB/3381msec) 00:19:19.652 slat (usec): min=6, max=11887, avg=229.35, stdev=1548.05 
00:19:19.652 clat (usec): min=392, max=42957, avg=31440.65, stdev=17285.10 00:19:19.652 lat (usec): min=407, max=51997, avg=31560.02, stdev=17379.53 00:19:19.652 clat percentiles (usec): 00:19:19.652 | 1.00th=[ 400], 5.00th=[ 441], 10.00th=[ 494], 20.00th=[ 502], 00:19:19.652 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:19.652 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:19.652 | 99.00th=[41157], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:19.652 | 99.99th=[42730] 00:19:19.652 bw ( KiB/s): min= 96, max= 288, per=1.92%, avg=129.33, stdev=77.80, samples=6 00:19:19.652 iops : min= 24, max= 72, avg=32.33, stdev=19.45, samples=6 00:19:19.652 lat (usec) : 500=17.76%, 750=5.61% 00:19:19.652 lat (msec) : 50=75.70% 00:19:19.652 cpu : usr=0.12%, sys=0.00%, ctx=110, majf=0, minf=1 00:19:19.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.652 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.652 issued rwts: total=107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:19.652 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3545461: Fri Jul 26 22:49:12 2024 00:19:19.652 read: IOPS=818, BW=3271KiB/s (3350kB/s)(11.7MiB/3649msec) 00:19:19.652 slat (usec): min=5, max=15477, avg=22.02, stdev=364.32 00:19:19.652 clat (usec): min=295, max=41561, avg=1189.65, stdev=5705.43 00:19:19.652 lat (usec): min=301, max=51953, avg=1211.67, stdev=5756.37 00:19:19.652 clat percentiles (usec): 00:19:19.652 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:19:19.653 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 00:19:19.653 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 453], 95.00th=[ 486], 00:19:19.653 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:19.653 | 99.99th=[41681] 00:19:19.653 bw ( KiB/s): min= 96, max=10272, per=50.38%, avg=3381.29, stdev=4422.19, samples=7 00:19:19.653 iops : min= 24, max= 2568, avg=845.29, stdev=1105.57, samples=7 00:19:19.653 lat (usec) : 500=96.01%, 750=1.71%, 1000=0.13% 00:19:19.653 lat (msec) : 2=0.07%, 20=0.03%, 50=2.01% 00:19:19.653 cpu : usr=0.58%, sys=1.32%, ctx=2993, majf=0, minf=1 00:19:19.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.653 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.653 issued rwts: total=2985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:19.653 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3545478: Fri Jul 26 22:49:12 2024 00:19:19.653 read: IOPS=901, BW=3603KiB/s (3690kB/s)(11.1MiB/3146msec) 00:19:19.653 slat (usec): min=5, max=15826, avg=16.33, stdev=297.10 00:19:19.653 clat (usec): min=302, max=41495, avg=1082.88, stdev=5346.94 00:19:19.653 lat (usec): min=308, max=41504, avg=1099.20, stdev=5356.32 00:19:19.653 clat percentiles (usec): 00:19:19.653 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:19:19.653 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 359], 00:19:19.653 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 441], 95.00th=[ 474], 00:19:19.653 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:19.653 | 99.99th=[41681] 00:19:19.653 bw ( KiB/s): min= 96, max=10288, per=54.40%, avg=3652.00, stdev=4593.77, samples=6 00:19:19.653 iops : min= 24, max= 2572, avg=913.00, stdev=1148.44, samples=6 00:19:19.653 lat (usec) : 500=96.93%, 750=1.20% 00:19:19.653 lat (msec) : 2=0.04%, 4=0.04%, 50=1.76% 00:19:19.653 cpu : usr=0.79%, sys=1.34%, ctx=2838, majf=0, minf=1 00:19:19.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.653 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.653 issued rwts: total=2835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:19.653 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3545485: Fri Jul 26 22:49:12 2024 00:19:19.653 read: IOPS=69, BW=275KiB/s (282kB/s)(792KiB/2881msec) 00:19:19.653 slat (nsec): min=8067, max=45645, avg=13699.48, stdev=8213.13 00:19:19.653 clat (usec): min=415, max=41282, avg=14386.10, stdev=19280.15 00:19:19.653 lat (usec): min=423, max=41298, avg=14399.81, stdev=19285.33 00:19:19.653 clat percentiles (usec): 00:19:19.653 | 1.00th=[ 420], 5.00th=[ 457], 10.00th=[ 465], 20.00th=[ 474], 00:19:19.653 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 486], 60.00th=[ 494], 00:19:19.653 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:19.653 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:19.653 | 99.99th=[41157] 00:19:19.653 bw ( KiB/s): min= 96, max= 1120, per=4.50%, avg=302.40, stdev=457.07, samples=5 00:19:19.653 iops : min= 24, max= 280, avg=75.60, stdev=114.27, samples=5 00:19:19.653 lat (usec) : 500=62.31%, 750=3.02% 00:19:19.653 lat (msec) : 50=34.17% 00:19:19.653 cpu : usr=0.00%, sys=0.17%, ctx=201, majf=0, minf=1 00:19:19.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.653 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.653 issued rwts: total=199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:19.653 00:19:19.653 Run status group 0 (all jobs): 00:19:19.653 READ: bw=6711KiB/s (6872kB/s), 125KiB/s-3603KiB/s (128kB/s-3690kB/s), io=23.9MiB (25.1MB), run=2881-3649msec 00:19:19.653 00:19:19.653 Disk stats (read/write): 00:19:19.653 nvme0n1: ios=151/0, merge=0/0, ticks=3546/0, in_queue=3546, util=99.31% 00:19:19.653 nvme0n2: ios=3027/0, merge=0/0, ticks=3644/0, in_queue=3644, util=98.85% 00:19:19.653 nvme0n3: ios=2885/0, merge=0/0, ticks=3851/0, in_queue=3851, util=99.19% 00:19:19.653 nvme0n4: ios=249/0, merge=0/0, ticks=2996/0, in_queue=2996, util=99.59% 00:19:19.911 22:49:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:19.911 22:49:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:20.168 22:49:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:20.168 22:49:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:19:20.426 22:49:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:20.426 22:49:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:20.684 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:20.684 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:20.941 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:20.941 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3545365 00:19:20.941 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:20.941 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:21.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:21.197 nvmf hotplug test: fio failed as expected 00:19:21.197 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.455 rmmod nvme_tcp 00:19:21.455 rmmod nvme_fabrics 00:19:21.455 rmmod nvme_keyring 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.455 22:49:13 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3543469 ']' 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3543469 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3543469 ']' 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3543469 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3543469 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3543469' 00:19:21.455 killing process with pid 3543469 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3543469 00:19:21.455 22:49:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3543469 00:19:21.714 22:49:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:21.714 22:49:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:21.714 22:49:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:21.714 22:49:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.714 22:49:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:21.714 22:49:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.714 22:49:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.714 22:49:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.246 22:49:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:24.246 00:19:24.246 real 0m23.116s 00:19:24.246 user 1m20.969s 00:19:24.246 sys 0m6.238s 00:19:24.246 22:49:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:24.246 22:49:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.246 ************************************ 00:19:24.246 END TEST nvmf_fio_target 00:19:24.246 ************************************ 00:19:24.246 22:49:16 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:24.246 22:49:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:24.246 22:49:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:24.246 22:49:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:24.246 ************************************ 00:19:24.246 START TEST nvmf_bdevio 00:19:24.246 ************************************ 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:24.246 * Looking for test storage... 
00:19:24.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:24.246 22:49:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.147 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.147 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.147 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.147 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.147 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.147 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.147 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.147 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:26.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:26.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:26.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:26.148 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:26.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:19:26.148 00:19:26.148 --- 10.0.0.2 ping statistics --- 00:19:26.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.148 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:26.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:19:26.148 00:19:26.148 --- 10.0.0.1 ping statistics --- 00:19:26.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.148 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3548081 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3548081 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3548081 ']' 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:26.148 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.148 [2024-07-26 22:49:18.341134] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:19:26.148 [2024-07-26 22:49:18.341216] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.148 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.149 [2024-07-26 22:49:18.408389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.149 [2024-07-26 22:49:18.502210] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.149 [2024-07-26 22:49:18.502269] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:26.149 [2024-07-26 22:49:18.502295] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.149 [2024-07-26 22:49:18.502309] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.149 [2024-07-26 22:49:18.502320] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.149 [2024-07-26 22:49:18.502389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:26.149 [2024-07-26 22:49:18.502442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:26.149 [2024-07-26 22:49:18.502497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:26.149 [2024-07-26 22:49:18.502500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.149 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:26.149 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:26.149 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:26.149 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.149 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.407 [2024-07-26 22:49:18.656718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.407 Malloc0 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:26.407 [2024-07-26 22:49:18.707839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:26.407 { 00:19:26.407 "params": { 00:19:26.407 "name": "Nvme$subsystem", 00:19:26.407 "trtype": "$TEST_TRANSPORT", 00:19:26.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:26.407 "adrfam": "ipv4", 00:19:26.407 "trsvcid": "$NVMF_PORT", 00:19:26.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:26.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:26.407 "hdgst": ${hdgst:-false}, 00:19:26.407 "ddgst": ${ddgst:-false} 00:19:26.407 }, 00:19:26.407 "method": "bdev_nvme_attach_controller" 00:19:26.407 } 00:19:26.407 EOF 00:19:26.407 )") 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:26.407 22:49:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:26.407 "params": { 00:19:26.407 "name": "Nvme1", 00:19:26.407 "trtype": "tcp", 00:19:26.407 "traddr": "10.0.0.2", 00:19:26.407 "adrfam": "ipv4", 00:19:26.407 "trsvcid": "4420", 00:19:26.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.407 "hdgst": false, 00:19:26.407 "ddgst": false 00:19:26.407 }, 00:19:26.407 "method": "bdev_nvme_attach_controller" 00:19:26.407 }' 00:19:26.407 [2024-07-26 22:49:18.751250] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:19:26.407 [2024-07-26 22:49:18.751325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548220 ] 00:19:26.407 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.407 [2024-07-26 22:49:18.812614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:26.407 [2024-07-26 22:49:18.905532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.407 [2024-07-26 22:49:18.905586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.407 [2024-07-26 22:49:18.905589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.973 I/O targets: 00:19:26.973 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:26.973 00:19:26.973 00:19:26.973 CUnit - A unit testing framework for C - Version 2.1-3 00:19:26.973 http://cunit.sourceforge.net/ 00:19:26.973 00:19:26.973 00:19:26.973 Suite: bdevio tests on: Nvme1n1 00:19:26.973 Test: blockdev write read block ...passed 00:19:26.973 Test: blockdev write zeroes read block ...passed 00:19:26.973 Test: blockdev write zeroes read no split ...passed 00:19:26.973 Test: blockdev write zeroes read split ...passed 00:19:26.973 Test: blockdev write zeroes read split partial ...passed 00:19:26.973 Test: blockdev reset ...[2024-07-26 22:49:19.418974] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.973 [2024-07-26 22:49:19.419095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb25a00 (9): Bad file descriptor 00:19:26.973 [2024-07-26 22:49:19.470194] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:26.973 passed 00:19:27.230 Test: blockdev write read 8 blocks ...passed 00:19:27.230 Test: blockdev write read size > 128k ...passed 00:19:27.230 Test: blockdev write read invalid size ...passed 00:19:27.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:27.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:27.230 Test: blockdev write read max offset ...passed 00:19:27.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:27.230 Test: blockdev writev readv 8 blocks ...passed 00:19:27.230 Test: blockdev writev readv 30 x 1block ...passed 00:19:27.230 Test: blockdev writev readv block ...passed 00:19:27.230 Test: blockdev writev readv size > 128k ...passed 00:19:27.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:27.230 Test: blockdev comparev and writev ...[2024-07-26 22:49:19.685450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.230 [2024-07-26 22:49:19.685486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.230 [2024-07-26 22:49:19.685512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.230 [2024-07-26 22:49:19.685529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:27.230 [2024-07-26 22:49:19.685909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.230 [2024-07-26 22:49:19.685934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:27.230 [2024-07-26 22:49:19.685957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.230 [2024-07-26 22:49:19.685973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:27.230 [2024-07-26 22:49:19.686365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.230 [2024-07-26 22:49:19.686389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:27.230 [2024-07-26 22:49:19.686411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.230 [2024-07-26 22:49:19.686428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:27.230 [2024-07-26 22:49:19.686785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.230 [2024-07-26 22:49:19.686809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:27.230 [2024-07-26 22:49:19.686839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.231 [2024-07-26 22:49:19.686856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:27.231 passed 00:19:27.487 Test: blockdev nvme passthru rw ...passed 00:19:27.487 Test: blockdev nvme passthru vendor specific ...[2024-07-26 22:49:19.771418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.487 [2024-07-26 22:49:19.771445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:27.487 [2024-07-26 22:49:19.771645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.487 [2024-07-26 22:49:19.771667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:27.487 [2024-07-26 22:49:19.771863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.487 [2024-07-26 22:49:19.771885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:27.487 [2024-07-26 22:49:19.772081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.487 [2024-07-26 22:49:19.772105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:27.487 passed 00:19:27.487 Test: blockdev nvme admin passthru ...passed 00:19:27.487 Test: blockdev copy ...passed 00:19:27.487 00:19:27.487 Run Summary: Type Total Ran Passed Failed Inactive 00:19:27.487 suites 1 1 n/a 0 0 00:19:27.487 tests 23 23 23 0 0 00:19:27.487 asserts 152 152 152 0 n/a 00:19:27.487 00:19:27.487 Elapsed time = 1.237 seconds 00:19:27.744 22:49:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:27.745 rmmod nvme_tcp 00:19:27.745 rmmod nvme_fabrics 00:19:27.745 rmmod nvme_keyring 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3548081 ']' 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3548081 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3548081 ']' 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3548081 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3548081 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3548081' 00:19:27.745 killing process with pid 3548081 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3548081 00:19:27.745 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3548081 00:19:28.004 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:28.004 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:28.004 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:28.004 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:28.004 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:28.004 22:49:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.004 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.004 22:49:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.906 22:49:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:29.906 00:19:29.906 real 0m6.205s 00:19:29.906 user 0m10.410s 00:19:29.906 sys 0m1.998s 00:19:29.906 22:49:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:29.906 22:49:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.906 ************************************ 00:19:29.906 END TEST nvmf_bdevio 00:19:29.906 ************************************ 00:19:29.906 22:49:22 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:29.906 22:49:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:30.164 22:49:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:30.164 22:49:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:30.164 ************************************ 00:19:30.164 START TEST nvmf_auth_target 00:19:30.164 ************************************ 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:30.164 * Looking for test storage... 
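The teardown traced above follows a fixed order: drop the subsystem over RPC while the target is still serving, unload the kernel initiator modules, kill the nvmf_tgt application, then flush the test interface. A minimal sketch of that sequence, with rpc.py standing in for the rpc_cmd wrapper and $nvmfpid for the target PID (3548081 above):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem before tearing down transport
    modprobe -v -r nvme-tcp                                   # verbose removal prints the rmmod lines seen above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                        # stop the target app, then reap it
    ip -4 addr flush cvl_0_1                                  # clear the initiator-side test address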
00:19:30.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.164 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:30.165 22:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.065 22:49:24 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:32.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:32.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:32.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:32.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.065 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.324 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.324 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.324 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:32.324 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.324 22:49:24 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.324 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:32.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:19:32.325 00:19:32.325 --- 10.0.0.2 ping statistics --- 00:19:32.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.325 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:19:32.325 00:19:32.325 --- 10.0.0.1 ping statistics --- 00:19:32.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.325 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3550286 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3550286 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3550286 ']' 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
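The interface and namespace plumbing traced above reduces to a small recipe: the target-side port moves into its own network namespace, each side gets one address on 10.0.0.0/24, and a firewall rule admits the NVMe/TCP port. A condensed sketch using the device names from this log (cvl_0_0 / cvl_0_1 are rig-specific):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator NIC stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # root ns reaches the target...
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # ...and the namespace reaches back

Only after both reachability checks succeed does the harness start the target inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), which is the nvmfappstart step traced next.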
00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:32.325 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.615 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:32.615 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:32.615 22:49:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.615 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.615 22:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3550315 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=62d112a406914d9c4085e4c3db7d97d49bd1d369be637e7c 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.r6m 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 62d112a406914d9c4085e4c3db7d97d49bd1d369be637e7c 0 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 62d112a406914d9c4085e4c3db7d97d49bd1d369be637e7c 0 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=62d112a406914d9c4085e4c3db7d97d49bd1d369be637e7c 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.r6m 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.r6m 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.r6m 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=749b814c77c3855b2052aa422da183ff5a5cb3d0f5793bce0d50b206ad506437 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KIJ 00:19:32.615 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 749b814c77c3855b2052aa422da183ff5a5cb3d0f5793bce0d50b206ad506437 3 00:19:32.616 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 749b814c77c3855b2052aa422da183ff5a5cb3d0f5793bce0d50b206ad506437 3 00:19:32.616 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:32.616 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:32.616 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=749b814c77c3855b2052aa422da183ff5a5cb3d0f5793bce0d50b206ad506437 00:19:32.616 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:32.616 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KIJ 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KIJ 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.KIJ 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=de8ff5a4561aa5d6b95b5b64c12937cc 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0kC 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key de8ff5a4561aa5d6b95b5b64c12937cc 1 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 de8ff5a4561aa5d6b95b5b64c12937cc 1 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=de8ff5a4561aa5d6b95b5b64c12937cc 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0kC 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0kC 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.0kC 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6aad09c2ba1f51b8b221dfc37e3c01814ad1a6d310b7682a 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VRj 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6aad09c2ba1f51b8b221dfc37e3c01814ad1a6d310b7682a 2 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6aad09c2ba1f51b8b221dfc37e3c01814ad1a6d310b7682a 2 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6aad09c2ba1f51b8b221dfc37e3c01814ad1a6d310b7682a 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VRj 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VRj 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.VRj 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4f8e74079ccc2d9e87af15c4a59be835119c01ab8f5847ee 00:19:32.881 
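Each gen_dhchap_key call in this stretch follows the same pattern: a requested length in hex characters, half that many random bytes pulled from /dev/urandom, and a mode-0600 temp file to hold the finished secret. A minimal sketch of the generation step (variable names illustrative):

    len=48                                          # hex chars requested, e.g. gen_dhchap_key null 48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len/2 random bytes -> len hex chars
    file=$(mktemp -t spdk.key-null.XXX)             # one temp file per key, as in /tmp/spdk.key-null.r6m
    chmod 0600 "$file"                              # secrets are kept owner-readable only

The digest map at the top of each call (null=0, sha256=1, sha384=2, sha512=3) picks the hash identifier embedded in the DHHC-1 string, as sketched after the key-generation trace below.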
22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oSL 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4f8e74079ccc2d9e87af15c4a59be835119c01ab8f5847ee 2 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4f8e74079ccc2d9e87af15c4a59be835119c01ab8f5847ee 2 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4f8e74079ccc2d9e87af15c4a59be835119c01ab8f5847ee 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oSL 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oSL 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.oSL 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4b7e107267ec5b0adb4d49e8c9993444 00:19:32.881 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LXo 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4b7e107267ec5b0adb4d49e8c9993444 1 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4b7e107267ec5b0adb4d49e8c9993444 1 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4b7e107267ec5b0adb4d49e8c9993444 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LXo 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LXo 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.LXo 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6c23213cd2a0391c120ae8b237d8e08261f41ba5a6f9db59e74784450459b6ea 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qqE 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6c23213cd2a0391c120ae8b237d8e08261f41ba5a6f9db59e74784450459b6ea 3 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6c23213cd2a0391c120ae8b237d8e08261f41ba5a6f9db59e74784450459b6ea 3 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6c23213cd2a0391c120ae8b237d8e08261f41ba5a6f9db59e74784450459b6ea 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:32.882 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qqE 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qqE 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.qqE 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3550286 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3550286 ']' 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
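The --dhchap-secret strings passed to nvme connect further down are the formatted versions of the hex keys generated above. The format_key DHHC-1 step (the bare "python -" in the trace) base64-encodes the ASCII hex string with a CRC-32 trailer and wraps it as DHHC-1:<digest-id>:<base64>:. A sketch of that encoding, assuming the CRC-32 is appended little-endian and invoking python3 explicitly:

    key=62d112a406914d9c4085e4c3db7d97d49bd1d369be637e7c   # keys[0] from the trace above
    digest=0                                               # 0=null, 1=sha256, 2=sha384, 3=sha512
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"

Decoding the secret used for cnode0 below (DHHC-1:00:NjJkMTEy...ZTdjd/AfXg==:) recovers exactly this 48-character key plus four trailing CRC bytes, so host and target can each validate the shared secret. Each finished key file is then registered twice, once on the target (rpc_cmd keyring_file_add_key keyN) and once on the host socket (hostrpc keyring_file_add_key keyN), before bdev_nvme_attach_controller exercises it with --dhchap-key keyN --dhchap-ctrlr-key ckeyN.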
00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:33.141 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3550315 /var/tmp/host.sock 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3550315 ']' 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:33.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.399 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.656 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.657 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:33.657 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.r6m 00:19:33.657 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.657 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.657 22:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.657 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.r6m 00:19:33.657 22:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.r6m 00:19:33.915 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.KIJ ]] 00:19:33.915 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KIJ 00:19:33.915 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.915 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.915 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.915 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KIJ 00:19:33.915 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KIJ 00:19:34.173 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:34.173 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0kC 00:19:34.173 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.173 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.173 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.173 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0kC 00:19:34.173 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0kC 00:19:34.431 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.VRj ]] 00:19:34.431 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VRj 00:19:34.431 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.431 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.431 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.431 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VRj 00:19:34.431 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VRj 00:19:34.688 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:34.688 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oSL 00:19:34.688 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.688 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.688 22:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.688 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oSL 00:19:34.688 22:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oSL 00:19:34.946 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.LXo ]] 00:19:34.946 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LXo 00:19:34.946 22:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.946 22:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.946 22:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.946 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LXo 00:19:34.946 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.LXo 00:19:35.204 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:35.204 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qqE 00:19:35.204 22:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.204 22:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.204 22:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.204 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qqE 00:19:35.204 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qqE 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.462 22:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.720 22:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.720 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.720 22:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.979 00:19:35.979 22:49:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.979 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.979 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.235 { 00:19:36.235 "cntlid": 1, 00:19:36.235 "qid": 0, 00:19:36.235 "state": "enabled", 00:19:36.235 "listen_address": { 00:19:36.235 "trtype": "TCP", 00:19:36.235 "adrfam": "IPv4", 00:19:36.235 "traddr": "10.0.0.2", 00:19:36.235 "trsvcid": "4420" 00:19:36.235 }, 00:19:36.235 "peer_address": { 00:19:36.235 "trtype": "TCP", 00:19:36.235 "adrfam": "IPv4", 00:19:36.235 "traddr": "10.0.0.1", 00:19:36.235 "trsvcid": "34196" 00:19:36.235 }, 00:19:36.235 "auth": { 00:19:36.235 "state": "completed", 00:19:36.235 "digest": "sha256", 00:19:36.235 "dhgroup": "null" 00:19:36.235 } 00:19:36.235 } 00:19:36.235 ]' 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.235 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.492 22:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:19:37.426 22:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.426 22:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.426 22:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.426 22:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:37.426 22:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.426 22:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.426 22:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.426 22:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.684 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.941 00:19:37.941 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.941 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.941 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.199 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.199 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.199 22:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.199 22:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.457 { 00:19:38.457 "cntlid": 3, 00:19:38.457 "qid": 0, 00:19:38.457 "state": "enabled", 00:19:38.457 "listen_address": { 00:19:38.457 
"trtype": "TCP", 00:19:38.457 "adrfam": "IPv4", 00:19:38.457 "traddr": "10.0.0.2", 00:19:38.457 "trsvcid": "4420" 00:19:38.457 }, 00:19:38.457 "peer_address": { 00:19:38.457 "trtype": "TCP", 00:19:38.457 "adrfam": "IPv4", 00:19:38.457 "traddr": "10.0.0.1", 00:19:38.457 "trsvcid": "54304" 00:19:38.457 }, 00:19:38.457 "auth": { 00:19:38.457 "state": "completed", 00:19:38.457 "digest": "sha256", 00:19:38.457 "dhgroup": "null" 00:19:38.457 } 00:19:38.457 } 00:19:38.457 ]' 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.457 22:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.715 22:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:19:39.648 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.648 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.648 22:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.648 22:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.648 22:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.648 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.648 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.648 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.906 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.164 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.422 { 00:19:40.422 "cntlid": 5, 00:19:40.422 "qid": 0, 00:19:40.422 "state": "enabled", 00:19:40.422 "listen_address": { 00:19:40.422 "trtype": "TCP", 00:19:40.422 "adrfam": "IPv4", 00:19:40.422 "traddr": "10.0.0.2", 00:19:40.422 "trsvcid": "4420" 00:19:40.422 }, 00:19:40.422 "peer_address": { 00:19:40.422 "trtype": "TCP", 00:19:40.422 "adrfam": "IPv4", 00:19:40.422 "traddr": "10.0.0.1", 00:19:40.422 "trsvcid": "54328" 00:19:40.422 }, 00:19:40.422 "auth": { 00:19:40.422 "state": "completed", 00:19:40.422 "digest": "sha256", 00:19:40.422 "dhgroup": "null" 00:19:40.422 } 00:19:40.422 } 00:19:40.422 ]' 00:19:40.422 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.680 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.680 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.680 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:40.680 22:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.680 22:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.680 22:49:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.680 22:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.939 22:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:19:41.871 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.871 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.871 22:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.871 22:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.871 22:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.871 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.871 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.871 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:42.127 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:42.127 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.127 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.127 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:42.127 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.127 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.127 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.128 22:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.128 22:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.128 22:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.128 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.128 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.385 00:19:42.385 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.385 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.385 22:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.642 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.642 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.642 22:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.642 22:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.642 22:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.642 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.642 { 00:19:42.642 "cntlid": 7, 00:19:42.642 "qid": 0, 00:19:42.642 "state": "enabled", 00:19:42.642 "listen_address": { 00:19:42.642 "trtype": "TCP", 00:19:42.642 "adrfam": "IPv4", 00:19:42.642 "traddr": "10.0.0.2", 00:19:42.642 "trsvcid": "4420" 00:19:42.642 }, 00:19:42.642 "peer_address": { 00:19:42.642 "trtype": "TCP", 00:19:42.642 "adrfam": "IPv4", 00:19:42.642 "traddr": "10.0.0.1", 00:19:42.642 "trsvcid": "54356" 00:19:42.642 }, 00:19:42.642 "auth": { 00:19:42.642 "state": "completed", 00:19:42.642 "digest": "sha256", 00:19:42.642 "dhgroup": "null" 00:19:42.642 } 00:19:42.642 } 00:19:42.642 ]' 00:19:42.642 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.900 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.900 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.900 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:42.900 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.900 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.900 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.900 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.157 22:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:19:44.091 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.091 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.091 22:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.091 
22:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.091 22:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.091 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.091 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.091 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.091 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.349 22:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.607 00:19:44.607 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.607 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.607 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.864 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.864 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.864 22:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.864 22:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.865 22:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.865 22:49:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.865 { 00:19:44.865 "cntlid": 9, 00:19:44.865 "qid": 0, 00:19:44.865 "state": "enabled", 00:19:44.865 "listen_address": { 00:19:44.865 "trtype": "TCP", 00:19:44.865 "adrfam": "IPv4", 00:19:44.865 "traddr": "10.0.0.2", 00:19:44.865 "trsvcid": "4420" 00:19:44.865 }, 00:19:44.865 "peer_address": { 00:19:44.865 "trtype": "TCP", 00:19:44.865 "adrfam": "IPv4", 00:19:44.865 "traddr": "10.0.0.1", 00:19:44.865 "trsvcid": "54384" 00:19:44.865 }, 00:19:44.865 "auth": { 00:19:44.865 "state": "completed", 00:19:44.865 "digest": "sha256", 00:19:44.865 "dhgroup": "ffdhe2048" 00:19:44.865 } 00:19:44.865 } 00:19:44.865 ]' 00:19:44.865 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.123 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.123 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.123 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.123 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.123 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.123 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.123 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.381 22:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:19:46.314 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.314 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.314 22:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.314 22:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.314 22:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.314 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.314 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.314 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.572 22:49:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.572 22:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.830 00:19:46.830 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.830 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.830 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.089 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.090 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.090 22:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.090 22:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.090 22:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.090 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.090 { 00:19:47.090 "cntlid": 11, 00:19:47.090 "qid": 0, 00:19:47.090 "state": "enabled", 00:19:47.090 "listen_address": { 00:19:47.090 "trtype": "TCP", 00:19:47.090 "adrfam": "IPv4", 00:19:47.090 "traddr": "10.0.0.2", 00:19:47.090 "trsvcid": "4420" 00:19:47.090 }, 00:19:47.090 "peer_address": { 00:19:47.090 "trtype": "TCP", 00:19:47.090 "adrfam": "IPv4", 00:19:47.090 "traddr": "10.0.0.1", 00:19:47.090 "trsvcid": "54414" 00:19:47.090 }, 00:19:47.090 "auth": { 00:19:47.090 "state": "completed", 00:19:47.090 "digest": "sha256", 00:19:47.090 "dhgroup": "ffdhe2048" 00:19:47.090 } 00:19:47.090 } 00:19:47.090 ]' 00:19:47.090 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.090 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.395 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.395 22:49:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.395 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.395 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.395 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.395 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.653 22:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:19:48.586 22:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.586 22:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.586 22:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.586 22:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.586 22:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.586 22:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.586 22:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.586 22:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.844 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.102 00:19:49.102 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.102 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.102 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.360 { 00:19:49.360 "cntlid": 13, 00:19:49.360 "qid": 0, 00:19:49.360 "state": "enabled", 00:19:49.360 "listen_address": { 00:19:49.360 "trtype": "TCP", 00:19:49.360 "adrfam": "IPv4", 00:19:49.360 "traddr": "10.0.0.2", 00:19:49.360 "trsvcid": "4420" 00:19:49.360 }, 00:19:49.360 "peer_address": { 00:19:49.360 "trtype": "TCP", 00:19:49.360 "adrfam": "IPv4", 00:19:49.360 "traddr": "10.0.0.1", 00:19:49.360 "trsvcid": "45570" 00:19:49.360 }, 00:19:49.360 "auth": { 00:19:49.360 "state": "completed", 00:19:49.360 "digest": "sha256", 00:19:49.360 "dhgroup": "ffdhe2048" 00:19:49.360 } 00:19:49.360 } 00:19:49.360 ]' 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.360 22:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.617 22:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:19:50.549 22:49:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.549 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.549 22:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.549 22:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.549 22:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.549 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.549 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.549 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.806 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.370 00:19:51.370 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.370 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.370 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
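
Each pass above follows the same shape: the host-side bdev_nvme module is pinned to a single (digest, dhgroup) pair, the host NQN is registered on the subsystem with one DH-CHAP key from the keyring, a controller is attached through the host RPC socket, and the resulting qpair is checked before everything is torn down again. A condensed sketch of one such pass, assuming key1/ckey1 were loaded earlier in the run (not shown in this excerpt) and that the target app listens on the default RPC socket; all paths, NQNs, addresses, and flags are taken from the surrounding entries:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host side: restrict DH-CHAP negotiation to the combination under test.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Target side: allow the host with this pass's key pair (bidirectional auth).
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attach from the host, then confirm the controller came up.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0
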
00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.627 { 00:19:51.627 "cntlid": 15, 00:19:51.627 "qid": 0, 00:19:51.627 "state": "enabled", 00:19:51.627 "listen_address": { 00:19:51.627 "trtype": "TCP", 00:19:51.627 "adrfam": "IPv4", 00:19:51.627 "traddr": "10.0.0.2", 00:19:51.627 "trsvcid": "4420" 00:19:51.627 }, 00:19:51.627 "peer_address": { 00:19:51.627 "trtype": "TCP", 00:19:51.627 "adrfam": "IPv4", 00:19:51.627 "traddr": "10.0.0.1", 00:19:51.627 "trsvcid": "45600" 00:19:51.627 }, 00:19:51.627 "auth": { 00:19:51.627 "state": "completed", 00:19:51.627 "digest": "sha256", 00:19:51.627 "dhgroup": "ffdhe2048" 00:19:51.627 } 00:19:51.627 } 00:19:51.627 ]' 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.627 22:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.627 22:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.627 22:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.627 22:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.884 22:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:19:52.816 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.816 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.816 22:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.816 22:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.816 22:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.816 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.816 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.816 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.817 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.074 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.639 00:19:53.639 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.639 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.640 22:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.640 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.640 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.640 22:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.640 22:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.640 22:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.640 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.640 { 00:19:53.640 "cntlid": 17, 00:19:53.640 "qid": 0, 00:19:53.640 "state": "enabled", 00:19:53.640 "listen_address": { 00:19:53.640 "trtype": "TCP", 00:19:53.640 "adrfam": "IPv4", 00:19:53.640 "traddr": "10.0.0.2", 00:19:53.640 "trsvcid": "4420" 00:19:53.640 }, 00:19:53.640 "peer_address": { 00:19:53.640 "trtype": "TCP", 00:19:53.640 "adrfam": "IPv4", 00:19:53.640 "traddr": "10.0.0.1", 00:19:53.640 "trsvcid": "45634" 00:19:53.640 }, 00:19:53.640 "auth": { 00:19:53.640 "state": "completed", 00:19:53.640 "digest": "sha256", 00:19:53.640 "dhgroup": "ffdhe3072" 00:19:53.640 } 00:19:53.640 } 00:19:53.640 ]' 00:19:53.640 22:49:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.898 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.898 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.898 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.898 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.898 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.898 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.898 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.156 22:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:19:55.089 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.089 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.089 22:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.089 22:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.089 22:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.089 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.089 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.089 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.347 
22:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.347 22:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.913 00:19:55.913 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.913 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.913 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.171 { 00:19:56.171 "cntlid": 19, 00:19:56.171 "qid": 0, 00:19:56.171 "state": "enabled", 00:19:56.171 "listen_address": { 00:19:56.171 "trtype": "TCP", 00:19:56.171 "adrfam": "IPv4", 00:19:56.171 "traddr": "10.0.0.2", 00:19:56.171 "trsvcid": "4420" 00:19:56.171 }, 00:19:56.171 "peer_address": { 00:19:56.171 "trtype": "TCP", 00:19:56.171 "adrfam": "IPv4", 00:19:56.171 "traddr": "10.0.0.1", 00:19:56.171 "trsvcid": "45676" 00:19:56.171 }, 00:19:56.171 "auth": { 00:19:56.171 "state": "completed", 00:19:56.171 "digest": "sha256", 00:19:56.171 "dhgroup": "ffdhe3072" 00:19:56.171 } 00:19:56.171 } 00:19:56.171 ]' 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.171 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.430 22:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:19:57.363 22:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.363 22:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.363 22:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.363 22:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.363 22:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.363 22:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.363 22:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.363 22:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.622 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.188 00:19:58.188 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.188 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
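
The assertions repeated after every attach read the target's view of the connection: nvmf_subsystem_get_qpairs returns one entry per qpair, and its auth object carries the negotiated digest, dhgroup, and authentication state. A minimal standalone re-check in the same style, with the subsystem NQN and expected values taken from the nearby entries (ffdhe3072 is the dhgroup under test at this point in the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # Each check mirrors one of the jq probes in the log; a mismatch fails the test.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The kernel-initiator half of each pass exercises the same key material through nvme-cli, passing it in the DHHC-1 text form shown above via --dhchap-secret and --dhchap-ctrl-secret before disconnecting and removing the host again.
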
00:19:58.188 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.446 { 00:19:58.446 "cntlid": 21, 00:19:58.446 "qid": 0, 00:19:58.446 "state": "enabled", 00:19:58.446 "listen_address": { 00:19:58.446 "trtype": "TCP", 00:19:58.446 "adrfam": "IPv4", 00:19:58.446 "traddr": "10.0.0.2", 00:19:58.446 "trsvcid": "4420" 00:19:58.446 }, 00:19:58.446 "peer_address": { 00:19:58.446 "trtype": "TCP", 00:19:58.446 "adrfam": "IPv4", 00:19:58.446 "traddr": "10.0.0.1", 00:19:58.446 "trsvcid": "56926" 00:19:58.446 }, 00:19:58.446 "auth": { 00:19:58.446 "state": "completed", 00:19:58.446 "digest": "sha256", 00:19:58.446 "dhgroup": "ffdhe3072" 00:19:58.446 } 00:19:58.446 } 00:19:58.446 ]' 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.446 22:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.704 22:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:19:59.638 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.638 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.638 22:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.638 22:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.638 22:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.638 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:59.638 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.638 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.896 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.460 00:20:00.460 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.460 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.460 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.718 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.718 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.718 22:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.718 22:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.718 22:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.718 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.718 { 00:20:00.718 "cntlid": 23, 00:20:00.718 "qid": 0, 00:20:00.718 "state": "enabled", 00:20:00.718 "listen_address": { 00:20:00.718 "trtype": "TCP", 00:20:00.718 "adrfam": "IPv4", 00:20:00.718 "traddr": "10.0.0.2", 00:20:00.718 "trsvcid": "4420" 00:20:00.718 }, 00:20:00.718 "peer_address": { 00:20:00.718 "trtype": "TCP", 00:20:00.718 "adrfam": "IPv4", 
00:20:00.718 "traddr": "10.0.0.1", 00:20:00.718 "trsvcid": "56956" 00:20:00.718 }, 00:20:00.718 "auth": { 00:20:00.718 "state": "completed", 00:20:00.718 "digest": "sha256", 00:20:00.718 "dhgroup": "ffdhe3072" 00:20:00.718 } 00:20:00.718 } 00:20:00.718 ]' 00:20:00.718 22:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.718 22:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.718 22:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.718 22:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.718 22:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.718 22:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.718 22:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.718 22:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.985 22:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:20:01.918 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.918 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.918 22:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.918 22:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.918 22:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.918 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.918 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.918 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.918 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.175 22:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.802 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.802 { 00:20:02.802 "cntlid": 25, 00:20:02.802 "qid": 0, 00:20:02.802 "state": "enabled", 00:20:02.802 "listen_address": { 00:20:02.802 "trtype": "TCP", 00:20:02.802 "adrfam": "IPv4", 00:20:02.802 "traddr": "10.0.0.2", 00:20:02.802 "trsvcid": "4420" 00:20:02.802 }, 00:20:02.802 "peer_address": { 00:20:02.802 "trtype": "TCP", 00:20:02.802 "adrfam": "IPv4", 00:20:02.802 "traddr": "10.0.0.1", 00:20:02.802 "trsvcid": "56986" 00:20:02.802 }, 00:20:02.802 "auth": { 00:20:02.802 "state": "completed", 00:20:02.802 "digest": "sha256", 00:20:02.802 "dhgroup": "ffdhe4096" 00:20:02.802 } 00:20:02.802 } 00:20:02.802 ]' 00:20:02.802 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.059 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.059 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.059 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:03.059 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.059 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.059 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.059 22:49:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.317 22:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:20:04.251 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.251 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.251 22:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.251 22:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.251 22:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.251 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.251 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:04.251 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.508 22:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.073 00:20:05.073 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.073 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.073 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.073 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.074 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.074 22:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.074 22:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.074 22:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.074 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.074 { 00:20:05.074 "cntlid": 27, 00:20:05.074 "qid": 0, 00:20:05.074 "state": "enabled", 00:20:05.074 "listen_address": { 00:20:05.074 "trtype": "TCP", 00:20:05.074 "adrfam": "IPv4", 00:20:05.074 "traddr": "10.0.0.2", 00:20:05.074 "trsvcid": "4420" 00:20:05.074 }, 00:20:05.074 "peer_address": { 00:20:05.074 "trtype": "TCP", 00:20:05.074 "adrfam": "IPv4", 00:20:05.074 "traddr": "10.0.0.1", 00:20:05.074 "trsvcid": "57002" 00:20:05.074 }, 00:20:05.074 "auth": { 00:20:05.074 "state": "completed", 00:20:05.074 "digest": "sha256", 00:20:05.074 "dhgroup": "ffdhe4096" 00:20:05.074 } 00:20:05.074 } 00:20:05.074 ]' 00:20:05.074 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.331 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.331 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.331 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.331 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.331 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.331 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.331 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.589 22:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:20:06.520 22:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.520 22:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
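Each pass of the loop above drives the same RPC sequence, varying only the digest, DH group, and key index. A condensed sketch of one bidirectional pass, using the rpc.py path and host RPC socket from this run; $HOSTNQN stands in for the uuid-qualified host NQN above, and key1/ckey1 are key names assumed to have been registered earlier in auth.sh:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # host side: restrict negotiation to one digest/dhgroup combination
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # target side: authorize the host with a key pair (ctrlr key => bidirectional auth)
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach; DH-HMAC-CHAP runs as part of the fabrics CONNECT
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # teardown before the next combination
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"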
00:20:06.520 22:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.520 22:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.520 22:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.520 22:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.520 22:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.520 22:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.776 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.340 00:20:07.340 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.340 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.340 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.340 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.340 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.340 22:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.340 22:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.597 22:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.597 
22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.597 { 00:20:07.597 "cntlid": 29, 00:20:07.597 "qid": 0, 00:20:07.597 "state": "enabled", 00:20:07.597 "listen_address": { 00:20:07.597 "trtype": "TCP", 00:20:07.597 "adrfam": "IPv4", 00:20:07.597 "traddr": "10.0.0.2", 00:20:07.597 "trsvcid": "4420" 00:20:07.597 }, 00:20:07.597 "peer_address": { 00:20:07.597 "trtype": "TCP", 00:20:07.597 "adrfam": "IPv4", 00:20:07.597 "traddr": "10.0.0.1", 00:20:07.597 "trsvcid": "43990" 00:20:07.597 }, 00:20:07.597 "auth": { 00:20:07.597 "state": "completed", 00:20:07.597 "digest": "sha256", 00:20:07.597 "dhgroup": "ffdhe4096" 00:20:07.597 } 00:20:07.597 } 00:20:07.597 ]' 00:20:07.597 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.597 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.597 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.597 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.597 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.597 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.597 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.597 22:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.854 22:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:20:08.784 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.784 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.784 22:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.784 22:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.784 22:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.784 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.784 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.784 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.042 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.606 00:20:09.606 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.606 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.606 22:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.606 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.606 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.606 22:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.606 22:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.606 22:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.606 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.606 { 00:20:09.606 "cntlid": 31, 00:20:09.606 "qid": 0, 00:20:09.606 "state": "enabled", 00:20:09.606 "listen_address": { 00:20:09.606 "trtype": "TCP", 00:20:09.606 "adrfam": "IPv4", 00:20:09.606 "traddr": "10.0.0.2", 00:20:09.606 "trsvcid": "4420" 00:20:09.606 }, 00:20:09.606 "peer_address": { 00:20:09.606 "trtype": "TCP", 00:20:09.606 "adrfam": "IPv4", 00:20:09.606 "traddr": "10.0.0.1", 00:20:09.606 "trsvcid": "44018" 00:20:09.606 }, 00:20:09.607 "auth": { 00:20:09.607 "state": "completed", 00:20:09.607 "digest": "sha256", 00:20:09.607 "dhgroup": "ffdhe4096" 00:20:09.607 } 00:20:09.607 } 00:20:09.607 ]' 00:20:09.607 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.864 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.864 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.864 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.864 22:50:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.864 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.864 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.864 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.120 22:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:20:11.049 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.049 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.049 22:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.049 22:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.049 22:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.049 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.049 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.049 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.049 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:20:11.306 22:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.871 00:20:11.871 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.871 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.871 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.129 { 00:20:12.129 "cntlid": 33, 00:20:12.129 "qid": 0, 00:20:12.129 "state": "enabled", 00:20:12.129 "listen_address": { 00:20:12.129 "trtype": "TCP", 00:20:12.129 "adrfam": "IPv4", 00:20:12.129 "traddr": "10.0.0.2", 00:20:12.129 "trsvcid": "4420" 00:20:12.129 }, 00:20:12.129 "peer_address": { 00:20:12.129 "trtype": "TCP", 00:20:12.129 "adrfam": "IPv4", 00:20:12.129 "traddr": "10.0.0.1", 00:20:12.129 "trsvcid": "44044" 00:20:12.129 }, 00:20:12.129 "auth": { 00:20:12.129 "state": "completed", 00:20:12.129 "digest": "sha256", 00:20:12.129 "dhgroup": "ffdhe6144" 00:20:12.129 } 00:20:12.129 } 00:20:12.129 ]' 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.129 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.386 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.386 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.386 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.643 22:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:20:13.575 22:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:13.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.575 22:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.575 22:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.575 22:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.575 22:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.575 22:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.575 22:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.575 22:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.833 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.398 00:20:14.398 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.398 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.398 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.655 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.655 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
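The assertion applied to the qpairs dump that follows is the same in every pass: the auth fields the target reports for the new queue pair must match what this iteration configured. A sketch, assuming rpc_cmd proxies rpc.py to the target as elsewhere in this run:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]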
00:20:14.655 22:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.655 22:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.655 22:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.655 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.655 { 00:20:14.655 "cntlid": 35, 00:20:14.655 "qid": 0, 00:20:14.655 "state": "enabled", 00:20:14.655 "listen_address": { 00:20:14.655 "trtype": "TCP", 00:20:14.655 "adrfam": "IPv4", 00:20:14.655 "traddr": "10.0.0.2", 00:20:14.655 "trsvcid": "4420" 00:20:14.655 }, 00:20:14.655 "peer_address": { 00:20:14.655 "trtype": "TCP", 00:20:14.655 "adrfam": "IPv4", 00:20:14.655 "traddr": "10.0.0.1", 00:20:14.655 "trsvcid": "44070" 00:20:14.655 }, 00:20:14.655 "auth": { 00:20:14.655 "state": "completed", 00:20:14.656 "digest": "sha256", 00:20:14.656 "dhgroup": "ffdhe6144" 00:20:14.656 } 00:20:14.656 } 00:20:14.656 ]' 00:20:14.656 22:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.656 22:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.656 22:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.656 22:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.656 22:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.656 22:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.656 22:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.656 22:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.913 22:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:20:15.846 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.846 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.846 22:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.846 22:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.846 22:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.846 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.846 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.846 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
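Each pass also exercises the same credentials through the kernel initiator, as in the nvme connect/disconnect lines above. A sketch with placeholder secrets; the real base64-encoded DHHC-1 strings appear verbatim in this log, and $HOSTNQN/$HOSTID stand for the uuid-qualified values used throughout the run:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret "DHHC-1:01:<host key, base64>" \
        --dhchap-ctrl-secret "DHHC-1:02:<controller key, base64>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0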
00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.104 22:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.670 00:20:16.670 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.670 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.670 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.928 { 00:20:16.928 "cntlid": 37, 00:20:16.928 "qid": 0, 00:20:16.928 "state": "enabled", 00:20:16.928 "listen_address": { 00:20:16.928 "trtype": "TCP", 00:20:16.928 "adrfam": "IPv4", 00:20:16.928 "traddr": "10.0.0.2", 00:20:16.928 "trsvcid": "4420" 00:20:16.928 }, 00:20:16.928 "peer_address": { 00:20:16.928 "trtype": "TCP", 00:20:16.928 "adrfam": "IPv4", 00:20:16.928 "traddr": "10.0.0.1", 00:20:16.928 "trsvcid": "44092" 00:20:16.928 }, 00:20:16.928 "auth": { 00:20:16.928 "state": "completed", 00:20:16.928 "digest": "sha256", 00:20:16.928 "dhgroup": "ffdhe6144" 00:20:16.928 } 00:20:16.928 } 00:20:16.928 ]' 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.928 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.186 22:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:20:18.155 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.155 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.155 22:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.155 22:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.155 22:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.155 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.155 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.155 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.413 22:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.978 00:20:18.979 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.979 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.979 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.237 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.237 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.237 22:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.237 22:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.237 22:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.237 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.237 { 00:20:19.237 "cntlid": 39, 00:20:19.237 "qid": 0, 00:20:19.237 "state": "enabled", 00:20:19.237 "listen_address": { 00:20:19.237 "trtype": "TCP", 00:20:19.237 "adrfam": "IPv4", 00:20:19.237 "traddr": "10.0.0.2", 00:20:19.237 "trsvcid": "4420" 00:20:19.237 }, 00:20:19.237 "peer_address": { 00:20:19.237 "trtype": "TCP", 00:20:19.237 "adrfam": "IPv4", 00:20:19.237 "traddr": "10.0.0.1", 00:20:19.237 "trsvcid": "34704" 00:20:19.237 }, 00:20:19.237 "auth": { 00:20:19.237 "state": "completed", 00:20:19.237 "digest": "sha256", 00:20:19.237 "dhgroup": "ffdhe6144" 00:20:19.237 } 00:20:19.237 } 00:20:19.237 ]' 00:20:19.237 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.495 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.495 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.495 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.495 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.495 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.495 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.495 22:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.753 22:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:20:20.687 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.687 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.687 22:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.687 22:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.687 22:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.687 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.687 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.687 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.687 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.944 22:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.876 00:20:21.876 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.876 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.876 22:50:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.134 { 00:20:22.134 "cntlid": 41, 00:20:22.134 "qid": 0, 00:20:22.134 "state": "enabled", 00:20:22.134 "listen_address": { 00:20:22.134 "trtype": "TCP", 00:20:22.134 "adrfam": "IPv4", 00:20:22.134 "traddr": "10.0.0.2", 00:20:22.134 "trsvcid": "4420" 00:20:22.134 }, 00:20:22.134 "peer_address": { 00:20:22.134 "trtype": "TCP", 00:20:22.134 "adrfam": "IPv4", 00:20:22.134 "traddr": "10.0.0.1", 00:20:22.134 "trsvcid": "34730" 00:20:22.134 }, 00:20:22.134 "auth": { 00:20:22.134 "state": "completed", 00:20:22.134 "digest": "sha256", 00:20:22.134 "dhgroup": "ffdhe8192" 00:20:22.134 } 00:20:22.134 } 00:20:22.134 ]' 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.134 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.391 22:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:20:23.324 22:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.324 22:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.324 22:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.324 22:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.324 22:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.324 22:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.324 22:50:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.324 22:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.582 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.516 00:20:24.516 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.516 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.516 22:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.774 { 00:20:24.774 "cntlid": 43, 00:20:24.774 "qid": 0, 00:20:24.774 "state": "enabled", 00:20:24.774 "listen_address": { 00:20:24.774 "trtype": "TCP", 00:20:24.774 "adrfam": "IPv4", 00:20:24.774 "traddr": "10.0.0.2", 00:20:24.774 "trsvcid": "4420" 00:20:24.774 }, 00:20:24.774 "peer_address": { 00:20:24.774 "trtype": "TCP", 00:20:24.774 
"adrfam": "IPv4", 00:20:24.774 "traddr": "10.0.0.1", 00:20:24.774 "trsvcid": "34740" 00:20:24.774 }, 00:20:24.774 "auth": { 00:20:24.774 "state": "completed", 00:20:24.774 "digest": "sha256", 00:20:24.774 "dhgroup": "ffdhe8192" 00:20:24.774 } 00:20:24.774 } 00:20:24.774 ]' 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.774 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.032 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.032 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.032 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.290 22:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:20:26.223 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.223 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.223 22:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.223 22:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.223 22:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.223 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.223 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.223 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.481 22:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.414 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.414 { 00:20:27.414 "cntlid": 45, 00:20:27.414 "qid": 0, 00:20:27.414 "state": "enabled", 00:20:27.414 "listen_address": { 00:20:27.414 "trtype": "TCP", 00:20:27.414 "adrfam": "IPv4", 00:20:27.414 "traddr": "10.0.0.2", 00:20:27.414 "trsvcid": "4420" 00:20:27.414 }, 00:20:27.414 "peer_address": { 00:20:27.414 "trtype": "TCP", 00:20:27.414 "adrfam": "IPv4", 00:20:27.414 "traddr": "10.0.0.1", 00:20:27.414 "trsvcid": "34782" 00:20:27.414 }, 00:20:27.414 "auth": { 00:20:27.414 "state": "completed", 00:20:27.414 "digest": "sha256", 00:20:27.414 "dhgroup": "ffdhe8192" 00:20:27.414 } 00:20:27.414 } 00:20:27.414 ]' 00:20:27.414 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.671 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.671 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.671 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.671 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.671 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.671 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.671 22:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.929 22:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:20:28.861 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.861 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.861 22:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.861 22:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.861 22:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.861 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.861 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.861 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.118 22:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.049 00:20:30.049 22:50:22 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.049 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.049 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.306 { 00:20:30.306 "cntlid": 47, 00:20:30.306 "qid": 0, 00:20:30.306 "state": "enabled", 00:20:30.306 "listen_address": { 00:20:30.306 "trtype": "TCP", 00:20:30.306 "adrfam": "IPv4", 00:20:30.306 "traddr": "10.0.0.2", 00:20:30.306 "trsvcid": "4420" 00:20:30.306 }, 00:20:30.306 "peer_address": { 00:20:30.306 "trtype": "TCP", 00:20:30.306 "adrfam": "IPv4", 00:20:30.306 "traddr": "10.0.0.1", 00:20:30.306 "trsvcid": "34390" 00:20:30.306 }, 00:20:30.306 "auth": { 00:20:30.306 "state": "completed", 00:20:30.306 "digest": "sha256", 00:20:30.306 "dhgroup": "ffdhe8192" 00:20:30.306 } 00:20:30.306 } 00:20:30.306 ]' 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.306 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.307 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.564 22:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.496 22:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.755 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.320 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:32.320 { 00:20:32.320 "cntlid": 49, 00:20:32.320 "qid": 0, 00:20:32.320 "state": "enabled", 00:20:32.320 "listen_address": { 00:20:32.320 "trtype": "TCP", 00:20:32.320 "adrfam": "IPv4", 00:20:32.320 "traddr": "10.0.0.2", 00:20:32.320 "trsvcid": "4420" 00:20:32.320 }, 00:20:32.320 "peer_address": { 00:20:32.320 "trtype": "TCP", 00:20:32.320 "adrfam": "IPv4", 00:20:32.320 "traddr": "10.0.0.1", 00:20:32.320 "trsvcid": "34414" 00:20:32.320 }, 00:20:32.320 "auth": { 00:20:32.320 "state": "completed", 00:20:32.320 "digest": "sha384", 00:20:32.320 "dhgroup": "null" 00:20:32.320 } 00:20:32.320 } 00:20:32.320 ]' 00:20:32.320 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.577 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.577 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.577 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:32.577 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.577 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.577 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.577 22:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.834 22:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:20:33.813 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.813 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.813 22:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.813 22:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.813 22:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.813 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.813 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:33.813 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:34.070 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:34.070 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.070 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.070 22:50:26 
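#
# --- annotation (not part of the trace) --------------------------------------
# The JSON array above is the output of the target-side
# nvmf_subsystem_get_qpairs RPC: an authenticated queue pair carries an
# "auth" object next to its listen/peer addresses. The script's checks
# reduce to three jq probes; a minimal standalone version of what
# auth.sh@46-48 asserts for this sha384/null round would be:
#
#   qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
#   [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
#   [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
#   [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
#
# "state": "completed" is the signal the test takes as a successful
# DH-HMAC-CHAP negotiation on that qpair.
# ------------------------------------------------------------------------------
#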
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:34.070 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.070 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.070 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.070 22:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.070 22:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.071 22:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.071 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.071 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.328 00:20:34.328 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.328 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.328 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.586 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.586 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.586 22:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.586 22:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.586 22:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.586 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.586 { 00:20:34.586 "cntlid": 51, 00:20:34.586 "qid": 0, 00:20:34.586 "state": "enabled", 00:20:34.586 "listen_address": { 00:20:34.586 "trtype": "TCP", 00:20:34.586 "adrfam": "IPv4", 00:20:34.586 "traddr": "10.0.0.2", 00:20:34.586 "trsvcid": "4420" 00:20:34.586 }, 00:20:34.586 "peer_address": { 00:20:34.586 "trtype": "TCP", 00:20:34.586 "adrfam": "IPv4", 00:20:34.586 "traddr": "10.0.0.1", 00:20:34.586 "trsvcid": "34454" 00:20:34.586 }, 00:20:34.586 "auth": { 00:20:34.586 "state": "completed", 00:20:34.586 "digest": "sha384", 00:20:34.586 "dhgroup": "null" 00:20:34.586 } 00:20:34.586 } 00:20:34.586 ]' 00:20:34.586 22:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.586 22:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.586 22:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.586 22:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:34.586 22:50:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.843 22:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.843 22:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.843 22:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.099 22:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:20:36.031 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.031 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.031 22:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.031 22:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.031 22:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.031 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.031 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.031 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
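#
# --- annotation (not part of the trace) --------------------------------------
# In these rounds the DH group is "null": DH-HMAC-CHAP runs as a plain
# challenge/response without the ephemeral FFDHE exchange that the
# ffdhe2048..ffdhe8192 groups (the RFC 7919 finite-field groups) layer on
# top. The bdev_nvme_set_options call repeated before every attach pins
# what the host-side driver may offer during negotiation:
#
#   rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
#       --dhchap-digests sha384 --dhchap-dhgroups null
# ------------------------------------------------------------------------------
#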
00:20:36.288 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.546 00:20:36.546 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.546 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.546 22:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.803 { 00:20:36.803 "cntlid": 53, 00:20:36.803 "qid": 0, 00:20:36.803 "state": "enabled", 00:20:36.803 "listen_address": { 00:20:36.803 "trtype": "TCP", 00:20:36.803 "adrfam": "IPv4", 00:20:36.803 "traddr": "10.0.0.2", 00:20:36.803 "trsvcid": "4420" 00:20:36.803 }, 00:20:36.803 "peer_address": { 00:20:36.803 "trtype": "TCP", 00:20:36.803 "adrfam": "IPv4", 00:20:36.803 "traddr": "10.0.0.1", 00:20:36.803 "trsvcid": "34490" 00:20:36.803 }, 00:20:36.803 "auth": { 00:20:36.803 "state": "completed", 00:20:36.803 "digest": "sha384", 00:20:36.803 "dhgroup": "null" 00:20:36.803 } 00:20:36.803 } 00:20:36.803 ]' 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.803 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.061 22:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.432 22:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.689 00:20:38.689 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.689 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.689 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.946 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.947 { 00:20:38.947 "cntlid": 55, 00:20:38.947 "qid": 0, 00:20:38.947 "state": "enabled", 00:20:38.947 "listen_address": { 00:20:38.947 "trtype": "TCP", 00:20:38.947 "adrfam": "IPv4", 00:20:38.947 "traddr": "10.0.0.2", 00:20:38.947 "trsvcid": "4420" 00:20:38.947 }, 00:20:38.947 "peer_address": { 00:20:38.947 "trtype": "TCP", 00:20:38.947 "adrfam": "IPv4", 00:20:38.947 "traddr": "10.0.0.1", 00:20:38.947 "trsvcid": "36820" 00:20:38.947 }, 00:20:38.947 "auth": { 00:20:38.947 "state": "completed", 00:20:38.947 "digest": "sha384", 00:20:38.947 "dhgroup": "null" 00:20:38.947 } 00:20:38.947 } 00:20:38.947 ]' 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:38.947 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.204 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.204 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.204 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.461 22:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:20:40.394 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.394 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.394 22:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.394 22:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.394 22:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.394 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.394 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.394 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.394 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:40.652 22:50:32 
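#
# --- annotation (not part of the trace) --------------------------------------
# The trace has just moved from the sha384/null rounds to sha384/ffdhe2048.
# The driving structure visible at auth.sh@91-94 is a three-level sweep; in
# outline (array contents inferred from the combinations this log actually
# exercises, not read from the script):
#
#   for digest in "${digests[@]}"; do        # sha256, sha384, ...
#     for dhgroup in "${dhgroups[@]}"; do    # ffdhe8192, null, ffdhe2048, ffdhe3072, ...
#       for keyid in "${!keys[@]}"; do       # 0..3
#         connect_authenticate "$digest" "$dhgroup" "$keyid"
#       done
#     done
#   done
# ------------------------------------------------------------------------------
#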
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.652 22:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.908 00:20:40.908 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.908 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.909 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.165 { 00:20:41.165 "cntlid": 57, 00:20:41.165 "qid": 0, 00:20:41.165 "state": "enabled", 00:20:41.165 "listen_address": { 00:20:41.165 "trtype": "TCP", 00:20:41.165 "adrfam": "IPv4", 00:20:41.165 "traddr": "10.0.0.2", 00:20:41.165 "trsvcid": "4420" 00:20:41.165 }, 00:20:41.165 "peer_address": { 00:20:41.165 "trtype": "TCP", 00:20:41.165 "adrfam": "IPv4", 00:20:41.165 "traddr": "10.0.0.1", 00:20:41.165 "trsvcid": "36858" 00:20:41.165 }, 00:20:41.165 "auth": { 00:20:41.165 "state": "completed", 00:20:41.165 "digest": "sha384", 00:20:41.165 "dhgroup": "ffdhe2048" 00:20:41.165 } 00:20:41.165 } 00:20:41.165 ]' 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.165 22:50:33 
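#
# --- annotation (not part of the trace) --------------------------------------
# Keys 0-2 are configured bidirectionally (--dhchap-key keyN plus
# --dhchap-ctrlr-key ckeyN), so the controller must also prove its identity
# to the host. In the key3 rounds above no ckey3 is defined, and the
# expansion traced at auth.sh@37,
#
#   ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
#
# yields an empty array, leaving --dhchap-ctrlr-key off both the add_host
# and attach_controller calls: authentication is then one-way, host to
# controller only.
# ------------------------------------------------------------------------------
#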
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.165 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.423 22:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:20:42.357 22:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.357 22:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.357 22:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.357 22:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.357 22:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.357 22:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.357 22:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.357 22:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.615 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:42.615 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.615 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.615 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:42.615 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:42.615 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.615 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.615 22:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.615 22:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.873 22:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.873 22:50:35 
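#
# --- annotation (not part of the trace) --------------------------------------
# The kernel-initiator leg of each round passes literal DHHC-1 secret
# strings to nvme-cli. Their "DHHC-1:NN:<base64>:" shape follows the NVMe
# DH-HMAC-CHAP secret representation, where NN encodes the hash used to
# transform the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384,
# 03 = SHA-512 -- stated here from the spec, not derivable from this log).
# A connect from this run, secrets elided:
#
#   nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
#       -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-... \
#       --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
#       --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
#   nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# ------------------------------------------------------------------------------
#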
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.873 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.131 00:20:43.131 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.131 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.131 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.389 { 00:20:43.389 "cntlid": 59, 00:20:43.389 "qid": 0, 00:20:43.389 "state": "enabled", 00:20:43.389 "listen_address": { 00:20:43.389 "trtype": "TCP", 00:20:43.389 "adrfam": "IPv4", 00:20:43.389 "traddr": "10.0.0.2", 00:20:43.389 "trsvcid": "4420" 00:20:43.389 }, 00:20:43.389 "peer_address": { 00:20:43.389 "trtype": "TCP", 00:20:43.389 "adrfam": "IPv4", 00:20:43.389 "traddr": "10.0.0.1", 00:20:43.389 "trsvcid": "36890" 00:20:43.389 }, 00:20:43.389 "auth": { 00:20:43.389 "state": "completed", 00:20:43.389 "digest": "sha384", 00:20:43.389 "dhgroup": "ffdhe2048" 00:20:43.389 } 00:20:43.389 } 00:20:43.389 ]' 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.389 22:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.647 22:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:20:44.581 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.581 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.581 22:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.581 22:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.581 22:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.581 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.581 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.581 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.838 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:44.838 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.839 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.839 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.839 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.839 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.839 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.839 22:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.839 22:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.096 22:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.096 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.096 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.355 00:20:45.355 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.355 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.355 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.613 { 00:20:45.613 "cntlid": 61, 00:20:45.613 "qid": 0, 00:20:45.613 "state": "enabled", 00:20:45.613 "listen_address": { 00:20:45.613 "trtype": "TCP", 00:20:45.613 "adrfam": "IPv4", 00:20:45.613 "traddr": "10.0.0.2", 00:20:45.613 "trsvcid": "4420" 00:20:45.613 }, 00:20:45.613 "peer_address": { 00:20:45.613 "trtype": "TCP", 00:20:45.613 "adrfam": "IPv4", 00:20:45.613 "traddr": "10.0.0.1", 00:20:45.613 "trsvcid": "36918" 00:20:45.613 }, 00:20:45.613 "auth": { 00:20:45.613 "state": "completed", 00:20:45.613 "digest": "sha384", 00:20:45.613 "dhgroup": "ffdhe2048" 00:20:45.613 } 00:20:45.613 } 00:20:45.613 ]' 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.613 22:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.613 22:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.613 22:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.613 22:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.871 22:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:20:46.805 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.805 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.805 22:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.805 22:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.805 22:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.805 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.805 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:46.805 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.062 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.320 00:20:47.320 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.320 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.320 22:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.579 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.579 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.579 22:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.579 22:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.579 22:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.579 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.579 { 00:20:47.579 "cntlid": 63, 00:20:47.579 "qid": 0, 00:20:47.579 "state": "enabled", 00:20:47.579 "listen_address": { 00:20:47.579 "trtype": "TCP", 00:20:47.579 "adrfam": "IPv4", 00:20:47.579 "traddr": "10.0.0.2", 00:20:47.579 "trsvcid": "4420" 00:20:47.579 }, 00:20:47.579 "peer_address": { 00:20:47.579 "trtype": "TCP", 00:20:47.579 "adrfam": "IPv4", 00:20:47.579 "traddr": "10.0.0.1", 00:20:47.579 "trsvcid": "54114" 00:20:47.579 }, 00:20:47.579 "auth": { 00:20:47.579 "state": "completed", 00:20:47.579 "digest": 
"sha384", 00:20:47.579 "dhgroup": "ffdhe2048" 00:20:47.579 } 00:20:47.579 } 00:20:47.579 ]' 00:20:47.579 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.837 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.837 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.837 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.837 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.837 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.837 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.837 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.097 22:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:20:49.066 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.066 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.066 22:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.066 22:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.066 22:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.066 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.066 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.066 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:49.066 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.324 22:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.582 00:20:49.582 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.582 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.582 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.840 { 00:20:49.840 "cntlid": 65, 00:20:49.840 "qid": 0, 00:20:49.840 "state": "enabled", 00:20:49.840 "listen_address": { 00:20:49.840 "trtype": "TCP", 00:20:49.840 "adrfam": "IPv4", 00:20:49.840 "traddr": "10.0.0.2", 00:20:49.840 "trsvcid": "4420" 00:20:49.840 }, 00:20:49.840 "peer_address": { 00:20:49.840 "trtype": "TCP", 00:20:49.840 "adrfam": "IPv4", 00:20:49.840 "traddr": "10.0.0.1", 00:20:49.840 "trsvcid": "54150" 00:20:49.840 }, 00:20:49.840 "auth": { 00:20:49.840 "state": "completed", 00:20:49.840 "digest": "sha384", 00:20:49.840 "dhgroup": "ffdhe3072" 00:20:49.840 } 00:20:49.840 } 00:20:49.840 ]' 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.840 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.097 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.097 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.097 22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.355 
22:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:20:51.285 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.285 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.285 22:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.285 22:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.285 22:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.285 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.285 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.285 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.543 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:51.543 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.543 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.543 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:51.543 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.543 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.544 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.544 22:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.544 22:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.544 22:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.544 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.544 22:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.802 00:20:51.802 22:50:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.802 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.802 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.060 { 00:20:52.060 "cntlid": 67, 00:20:52.060 "qid": 0, 00:20:52.060 "state": "enabled", 00:20:52.060 "listen_address": { 00:20:52.060 "trtype": "TCP", 00:20:52.060 "adrfam": "IPv4", 00:20:52.060 "traddr": "10.0.0.2", 00:20:52.060 "trsvcid": "4420" 00:20:52.060 }, 00:20:52.060 "peer_address": { 00:20:52.060 "trtype": "TCP", 00:20:52.060 "adrfam": "IPv4", 00:20:52.060 "traddr": "10.0.0.1", 00:20:52.060 "trsvcid": "54174" 00:20:52.060 }, 00:20:52.060 "auth": { 00:20:52.060 "state": "completed", 00:20:52.060 "digest": "sha384", 00:20:52.060 "dhgroup": "ffdhe3072" 00:20:52.060 } 00:20:52.060 } 00:20:52.060 ]' 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.060 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.317 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.318 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.318 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.575 22:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:20:53.509 22:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.509 22:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.509 22:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.509 22:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.509 
22:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.509 22:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.509 22:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.509 22:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.767 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.025 00:20:54.025 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.025 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.025 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.283 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.283 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.283 22:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.283 22:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.283 22:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.283 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.283 { 00:20:54.283 "cntlid": 69, 00:20:54.283 "qid": 0, 00:20:54.283 "state": "enabled", 00:20:54.283 "listen_address": { 
00:20:54.283 "trtype": "TCP", 00:20:54.283 "adrfam": "IPv4", 00:20:54.283 "traddr": "10.0.0.2", 00:20:54.283 "trsvcid": "4420" 00:20:54.283 }, 00:20:54.283 "peer_address": { 00:20:54.283 "trtype": "TCP", 00:20:54.283 "adrfam": "IPv4", 00:20:54.283 "traddr": "10.0.0.1", 00:20:54.283 "trsvcid": "54192" 00:20:54.283 }, 00:20:54.283 "auth": { 00:20:54.283 "state": "completed", 00:20:54.283 "digest": "sha384", 00:20:54.283 "dhgroup": "ffdhe3072" 00:20:54.283 } 00:20:54.283 } 00:20:54.283 ]' 00:20:54.283 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.283 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.283 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.541 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.541 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.541 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.541 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.541 22:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.799 22:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:20:55.731 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.731 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.731 22:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.731 22:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.731 22:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.731 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.731 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.731 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.989 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:55.989 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.989 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.989 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.989 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.989 
22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.989 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:55.989 22:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.989 22:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.989 22:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.990 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.990 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.247 00:20:56.247 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.247 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.247 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.505 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.505 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.505 22:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.505 22:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.505 22:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.505 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.505 { 00:20:56.505 "cntlid": 71, 00:20:56.505 "qid": 0, 00:20:56.505 "state": "enabled", 00:20:56.505 "listen_address": { 00:20:56.505 "trtype": "TCP", 00:20:56.505 "adrfam": "IPv4", 00:20:56.505 "traddr": "10.0.0.2", 00:20:56.505 "trsvcid": "4420" 00:20:56.505 }, 00:20:56.505 "peer_address": { 00:20:56.505 "trtype": "TCP", 00:20:56.505 "adrfam": "IPv4", 00:20:56.505 "traddr": "10.0.0.1", 00:20:56.505 "trsvcid": "54224" 00:20:56.505 }, 00:20:56.505 "auth": { 00:20:56.505 "state": "completed", 00:20:56.505 "digest": "sha384", 00:20:56.505 "dhgroup": "ffdhe3072" 00:20:56.505 } 00:20:56.505 } 00:20:56.505 ]' 00:20:56.505 22:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.763 22:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.763 22:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.763 22:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.763 22:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.763 22:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.763 22:50:49 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.763 22:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.020 22:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:20:57.952 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.952 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.952 22:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.952 22:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.952 22:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.952 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.952 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.952 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:57.952 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.210 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.775 00:20:58.775 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.775 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.775 22:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.775 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.775 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.775 22:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.775 22:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.775 22:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.775 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.775 { 00:20:58.775 "cntlid": 73, 00:20:58.775 "qid": 0, 00:20:58.775 "state": "enabled", 00:20:58.775 "listen_address": { 00:20:58.775 "trtype": "TCP", 00:20:58.775 "adrfam": "IPv4", 00:20:58.775 "traddr": "10.0.0.2", 00:20:58.775 "trsvcid": "4420" 00:20:58.775 }, 00:20:58.775 "peer_address": { 00:20:58.775 "trtype": "TCP", 00:20:58.775 "adrfam": "IPv4", 00:20:58.775 "traddr": "10.0.0.1", 00:20:58.775 "trsvcid": "60060" 00:20:58.775 }, 00:20:58.775 "auth": { 00:20:58.775 "state": "completed", 00:20:58.775 "digest": "sha384", 00:20:58.775 "dhgroup": "ffdhe4096" 00:20:58.775 } 00:20:58.775 } 00:20:58.775 ]' 00:20:58.775 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.032 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.032 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.032 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.032 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.032 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.032 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.032 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.289 22:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:21:00.221 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.221 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.221 22:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.221 22:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.221 22:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.221 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.221 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.221 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.478 22:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.479 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.479 22:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.042 00:21:01.042 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.042 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.042 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.299 { 00:21:01.299 "cntlid": 75, 00:21:01.299 "qid": 0, 00:21:01.299 "state": "enabled", 00:21:01.299 "listen_address": { 00:21:01.299 "trtype": "TCP", 00:21:01.299 "adrfam": "IPv4", 00:21:01.299 "traddr": "10.0.0.2", 00:21:01.299 "trsvcid": "4420" 00:21:01.299 }, 00:21:01.299 "peer_address": { 00:21:01.299 "trtype": "TCP", 00:21:01.299 "adrfam": "IPv4", 00:21:01.299 "traddr": "10.0.0.1", 00:21:01.299 "trsvcid": "60070" 00:21:01.299 }, 00:21:01.299 "auth": { 00:21:01.299 "state": "completed", 00:21:01.299 "digest": "sha384", 00:21:01.299 "dhgroup": "ffdhe4096" 00:21:01.299 } 00:21:01.299 } 00:21:01.299 ]' 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.299 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.556 22:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:21:02.486 22:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.486 22:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.486 22:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.486 22:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.486 22:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.486 22:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.486 22:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.486 22:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.743 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.370 00:21:03.370 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.370 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.370 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.628 { 00:21:03.628 "cntlid": 77, 00:21:03.628 "qid": 0, 00:21:03.628 "state": "enabled", 00:21:03.628 "listen_address": { 00:21:03.628 "trtype": "TCP", 00:21:03.628 "adrfam": "IPv4", 00:21:03.628 "traddr": "10.0.0.2", 00:21:03.628 "trsvcid": "4420" 00:21:03.628 }, 00:21:03.628 "peer_address": { 00:21:03.628 "trtype": "TCP", 00:21:03.628 "adrfam": "IPv4", 00:21:03.628 "traddr": "10.0.0.1", 00:21:03.628 "trsvcid": "60096" 00:21:03.628 }, 00:21:03.628 "auth": { 00:21:03.628 "state": "completed", 00:21:03.628 "digest": "sha384", 00:21:03.628 "dhgroup": "ffdhe4096" 00:21:03.628 } 00:21:03.628 } 00:21:03.628 ]' 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.628 22:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.628 22:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.628 22:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.628 22:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.886 22:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:21:04.819 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.819 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.819 22:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.819 22:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.819 22:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.819 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.819 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.819 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.077 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.643 00:21:05.643 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.643 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.643 22:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.900 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.900 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.900 22:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.900 22:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.900 22:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.900 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.900 { 00:21:05.900 "cntlid": 79, 00:21:05.900 "qid": 0, 00:21:05.900 "state": "enabled", 00:21:05.900 "listen_address": { 00:21:05.900 "trtype": "TCP", 00:21:05.900 "adrfam": "IPv4", 00:21:05.901 "traddr": "10.0.0.2", 00:21:05.901 "trsvcid": "4420" 00:21:05.901 }, 00:21:05.901 "peer_address": { 00:21:05.901 "trtype": "TCP", 00:21:05.901 "adrfam": "IPv4", 00:21:05.901 "traddr": "10.0.0.1", 00:21:05.901 "trsvcid": "60134" 00:21:05.901 }, 00:21:05.901 "auth": { 00:21:05.901 "state": "completed", 00:21:05.901 "digest": "sha384", 00:21:05.901 "dhgroup": "ffdhe4096" 00:21:05.901 } 00:21:05.901 } 00:21:05.901 ]' 00:21:05.901 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.901 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.901 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.901 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.901 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.901 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.901 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.901 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.158 22:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:21:07.091 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.091 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.091 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.091 22:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.091 22:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.091 22:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.091 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.091 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.091 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.091 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.349 22:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.915 00:21:07.915 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.915 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.915 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.173 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.173 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.173 22:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.173 22:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.173 22:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.173 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.173 { 00:21:08.173 "cntlid": 81, 00:21:08.173 "qid": 0, 00:21:08.173 "state": "enabled", 00:21:08.173 "listen_address": { 00:21:08.173 "trtype": "TCP", 00:21:08.173 "adrfam": "IPv4", 00:21:08.173 "traddr": "10.0.0.2", 00:21:08.173 "trsvcid": "4420" 00:21:08.173 }, 00:21:08.173 "peer_address": { 00:21:08.173 "trtype": "TCP", 00:21:08.173 "adrfam": "IPv4", 00:21:08.173 "traddr": "10.0.0.1", 00:21:08.173 "trsvcid": "58974" 00:21:08.173 }, 00:21:08.173 "auth": { 00:21:08.173 "state": "completed", 00:21:08.173 "digest": "sha384", 00:21:08.173 "dhgroup": "ffdhe6144" 00:21:08.173 } 00:21:08.173 } 00:21:08.173 ]' 00:21:08.173 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.431 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.431 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.431 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.431 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.432 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.432 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.432 22:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.689 22:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:21:09.623 22:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.623 22:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.623 22:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.623 22:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.623 22:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.623 22:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.623 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.623 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.881 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.446 00:21:10.446 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.446 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.446 22:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.704 { 00:21:10.704 "cntlid": 83, 00:21:10.704 "qid": 0, 00:21:10.704 "state": "enabled", 00:21:10.704 "listen_address": { 00:21:10.704 "trtype": "TCP", 00:21:10.704 "adrfam": "IPv4", 00:21:10.704 "traddr": "10.0.0.2", 00:21:10.704 "trsvcid": "4420" 00:21:10.704 }, 00:21:10.704 "peer_address": { 00:21:10.704 "trtype": "TCP", 00:21:10.704 "adrfam": "IPv4", 00:21:10.704 "traddr": "10.0.0.1", 00:21:10.704 "trsvcid": "59004" 00:21:10.704 }, 00:21:10.704 "auth": { 00:21:10.704 "state": "completed", 00:21:10.704 "digest": "sha384", 00:21:10.704 
"dhgroup": "ffdhe6144" 00:21:10.704 } 00:21:10.704 } 00:21:10.704 ]' 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.704 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.964 22:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:21:11.896 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.896 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.896 22:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.896 22:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.896 22:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.896 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.896 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.896 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.154 22:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.088 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.088 { 00:21:13.088 "cntlid": 85, 00:21:13.088 "qid": 0, 00:21:13.088 "state": "enabled", 00:21:13.088 "listen_address": { 00:21:13.088 "trtype": "TCP", 00:21:13.088 "adrfam": "IPv4", 00:21:13.088 "traddr": "10.0.0.2", 00:21:13.088 "trsvcid": "4420" 00:21:13.088 }, 00:21:13.088 "peer_address": { 00:21:13.088 "trtype": "TCP", 00:21:13.088 "adrfam": "IPv4", 00:21:13.088 "traddr": "10.0.0.1", 00:21:13.088 "trsvcid": "59026" 00:21:13.088 }, 00:21:13.088 "auth": { 00:21:13.088 "state": "completed", 00:21:13.088 "digest": "sha384", 00:21:13.088 "dhgroup": "ffdhe6144" 00:21:13.088 } 00:21:13.088 } 00:21:13.088 ]' 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.088 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.346 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.346 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.346 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.346 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.346 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.603 22:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
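
Each pass ends with a kernel-initiator check: nvme-cli dials the target directly, passing the DH-HMAC-CHAP secrets in their textual form on the command line. The flags below are copied from the surrounding entries (secrets abbreviated); -i 1 requests a single I/O queue, presumably all the authentication check needs:

  # host kernel connect with explicit DH-HMAC-CHAP secrets, then tear down
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
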
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:21:14.536 22:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.536 22:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.536 22:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.536 22:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.536 22:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.536 22:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.536 22:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.536 22:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.795 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.361 00:21:15.361 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.361 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.361 22:51:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.619 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.619 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.619 22:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.619 22:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.619 22:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.619 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.619 { 00:21:15.619 "cntlid": 87, 00:21:15.619 "qid": 0, 00:21:15.619 "state": "enabled", 00:21:15.619 "listen_address": { 00:21:15.619 "trtype": "TCP", 00:21:15.619 "adrfam": "IPv4", 00:21:15.619 "traddr": "10.0.0.2", 00:21:15.619 "trsvcid": "4420" 00:21:15.619 }, 00:21:15.619 "peer_address": { 00:21:15.619 "trtype": "TCP", 00:21:15.619 "adrfam": "IPv4", 00:21:15.619 "traddr": "10.0.0.1", 00:21:15.619 "trsvcid": "59054" 00:21:15.619 }, 00:21:15.619 "auth": { 00:21:15.619 "state": "completed", 00:21:15.619 "digest": "sha384", 00:21:15.619 "dhgroup": "ffdhe6144" 00:21:15.619 } 00:21:15.619 } 00:21:15.619 ]' 00:21:15.619 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.619 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.619 22:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.619 22:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.619 22:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.619 22:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.619 22:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.619 22:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.877 22:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:21:16.810 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.810 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.810 22:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.810 22:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.810 22:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.810 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.810 22:51:09 
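
This closes the sha384/ffdhe6144 sweep over key indexes 0 through 3; the outer loop now advances to ffdhe8192. Note that the key3 pass is unidirectional: add_host and attach_controller above carry only --dhchap-key key3 with no controller key, and the matching nvme connect passes a single --dhchap-secret, so that pass proves host authentication alone, while keys 0 to 2 also verify the controller back to the host. A condensed sketch of the sweep as the xtrace markers suggest it is driven (loop shape inferred from this log, not the script verbatim):

  # inferred sweep: restrict the host to one digest/dhgroup, then try each key
  for dhgroup in ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3; do
      rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
          --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      connect_authenticate sha384 "$dhgroup" "$keyid"  # add_host, attach, verify, detach
    done
  done
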
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.810 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.810 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.068 22:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.001 00:21:18.001 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.001 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.001 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.259 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.259 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.259 22:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.259 22:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.259 22:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.259 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.259 { 00:21:18.259 "cntlid": 89, 00:21:18.259 "qid": 0, 00:21:18.259 "state": "enabled", 00:21:18.259 "listen_address": { 00:21:18.259 "trtype": "TCP", 00:21:18.259 "adrfam": "IPv4", 00:21:18.259 "traddr": "10.0.0.2", 00:21:18.259 
"trsvcid": "4420" 00:21:18.259 }, 00:21:18.259 "peer_address": { 00:21:18.259 "trtype": "TCP", 00:21:18.259 "adrfam": "IPv4", 00:21:18.259 "traddr": "10.0.0.1", 00:21:18.259 "trsvcid": "45372" 00:21:18.259 }, 00:21:18.259 "auth": { 00:21:18.259 "state": "completed", 00:21:18.259 "digest": "sha384", 00:21:18.259 "dhgroup": "ffdhe8192" 00:21:18.259 } 00:21:18.259 } 00:21:18.259 ]' 00:21:18.259 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.259 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.259 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.517 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.517 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.517 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.517 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.517 22:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.806 22:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:21:19.738 22:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.738 22:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.738 22:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.738 22:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.738 22:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.738 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.738 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:19.738 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:19.995 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:19.995 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.995 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:19.995 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:19.995 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:19.995 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.996 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.996 22:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.996 22:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.996 22:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.996 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.996 22:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.927 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.927 { 00:21:20.927 "cntlid": 91, 00:21:20.927 "qid": 0, 00:21:20.927 "state": "enabled", 00:21:20.927 "listen_address": { 00:21:20.927 "trtype": "TCP", 00:21:20.927 "adrfam": "IPv4", 00:21:20.927 "traddr": "10.0.0.2", 00:21:20.927 "trsvcid": "4420" 00:21:20.927 }, 00:21:20.927 "peer_address": { 00:21:20.927 "trtype": "TCP", 00:21:20.927 "adrfam": "IPv4", 00:21:20.927 "traddr": "10.0.0.1", 00:21:20.927 "trsvcid": "45400" 00:21:20.927 }, 00:21:20.927 "auth": { 00:21:20.927 "state": "completed", 00:21:20.927 "digest": "sha384", 00:21:20.927 "dhgroup": "ffdhe8192" 00:21:20.927 } 00:21:20.927 } 00:21:20.927 ]' 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.927 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.184 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.184 22:51:13 
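
After each attach the harness proves that authentication actually ran by reading the qpair back from the target and asserting on the auth block: the negotiated digest, the negotiated DH group, and a state of "completed". The filters are the same ones visible in the surrounding entries; rpc_cmd is the suite's wrapper around scripts/rpc.py against the target's default socket:

  # target-side check of what the controller actually negotiated
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384"    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
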
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.184 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.440 22:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:21:22.372 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.372 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.372 22:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.372 22:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.372 22:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.372 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.372 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.372 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.628 22:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.561 00:21:23.561 22:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.561 22:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.561 22:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.819 { 00:21:23.819 "cntlid": 93, 00:21:23.819 "qid": 0, 00:21:23.819 "state": "enabled", 00:21:23.819 "listen_address": { 00:21:23.819 "trtype": "TCP", 00:21:23.819 "adrfam": "IPv4", 00:21:23.819 "traddr": "10.0.0.2", 00:21:23.819 "trsvcid": "4420" 00:21:23.819 }, 00:21:23.819 "peer_address": { 00:21:23.819 "trtype": "TCP", 00:21:23.819 "adrfam": "IPv4", 00:21:23.819 "traddr": "10.0.0.1", 00:21:23.819 "trsvcid": "45422" 00:21:23.819 }, 00:21:23.819 "auth": { 00:21:23.819 "state": "completed", 00:21:23.819 "digest": "sha384", 00:21:23.819 "dhgroup": "ffdhe8192" 00:21:23.819 } 00:21:23.819 } 00:21:23.819 ]' 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.819 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.076 22:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.449 22:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.383 00:21:26.383 22:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.383 22:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.383 22:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.640 22:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.640 22:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.640 22:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.640 22:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.640 22:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.640 22:51:18 
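
Each pass wires the same key material into both ends before connecting. On the target, nvmf_subsystem_add_host declares which DH-HMAC-CHAP key (and optional controller key) this host NQN must present; on the host, a second SPDK application listening on /var/tmp/host.sock attaches with the matching key names. Both commands are reproduced from the entries above; key0..key3 and ckey0..ckey3 are keyring names the harness loaded earlier in the run:

  # target: require key3 from this host (key3 carries no controller key)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key3
  # host: the SPDK initiator dials back with the same key
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
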
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.640 { 00:21:26.640 "cntlid": 95, 00:21:26.640 "qid": 0, 00:21:26.640 "state": "enabled", 00:21:26.640 "listen_address": { 00:21:26.640 "trtype": "TCP", 00:21:26.640 "adrfam": "IPv4", 00:21:26.640 "traddr": "10.0.0.2", 00:21:26.640 "trsvcid": "4420" 00:21:26.640 }, 00:21:26.640 "peer_address": { 00:21:26.640 "trtype": "TCP", 00:21:26.640 "adrfam": "IPv4", 00:21:26.640 "traddr": "10.0.0.1", 00:21:26.640 "trsvcid": "45462" 00:21:26.640 }, 00:21:26.640 "auth": { 00:21:26.640 "state": "completed", 00:21:26.640 "digest": "sha384", 00:21:26.640 "dhgroup": "ffdhe8192" 00:21:26.640 } 00:21:26.641 } 00:21:26.641 ]' 00:21:26.641 22:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.641 22:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.641 22:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.641 22:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.641 22:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.641 22:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.641 22:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.641 22:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.898 22:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:27.833 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.090 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:28.090 22:51:20 nvmf_tcp.nvmf_auth_target -- 
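
The digest loop advances here from sha384 to sha512, starting with the "null" DH group. With the null group no ephemeral Diffie-Hellman exchange augments the handshake; authentication degrades to plain challenge-response, so these passes confirm that digest negotiation alone completes end to end. Restricting the host initiator to that single combination is again done through bdev_nvme_set_options, verbatim from the log:

  # allow only sha512 and the null DH group on the host side
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null
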
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.090 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.090 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:28.090 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.090 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.090 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.091 22:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.091 22:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.091 22:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.091 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.091 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.657 00:21:28.657 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.657 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.657 22:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.657 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.657 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.657 22:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.657 22:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.657 22:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.657 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.657 { 00:21:28.657 "cntlid": 97, 00:21:28.657 "qid": 0, 00:21:28.657 "state": "enabled", 00:21:28.657 "listen_address": { 00:21:28.657 "trtype": "TCP", 00:21:28.657 "adrfam": "IPv4", 00:21:28.657 "traddr": "10.0.0.2", 00:21:28.657 "trsvcid": "4420" 00:21:28.657 }, 00:21:28.657 "peer_address": { 00:21:28.657 "trtype": "TCP", 00:21:28.657 "adrfam": "IPv4", 00:21:28.657 "traddr": "10.0.0.1", 00:21:28.657 "trsvcid": "59170" 00:21:28.657 }, 00:21:28.657 "auth": { 00:21:28.657 "state": "completed", 00:21:28.657 "digest": "sha512", 00:21:28.657 "dhgroup": "null" 00:21:28.657 } 00:21:28.657 } 00:21:28.657 ]' 00:21:28.657 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.915 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.915 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:28.915 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:28.915 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.915 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.915 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.915 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.173 22:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:21:30.107 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.107 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.107 22:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.107 22:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.107 22:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.107 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.107 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.107 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.365 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.366 22:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.930 00:21:30.930 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.930 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.930 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.930 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.930 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.930 22:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.930 22:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.930 22:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.930 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.930 { 00:21:30.931 "cntlid": 99, 00:21:30.931 "qid": 0, 00:21:30.931 "state": "enabled", 00:21:30.931 "listen_address": { 00:21:30.931 "trtype": "TCP", 00:21:30.931 "adrfam": "IPv4", 00:21:30.931 "traddr": "10.0.0.2", 00:21:30.931 "trsvcid": "4420" 00:21:30.931 }, 00:21:30.931 "peer_address": { 00:21:30.931 "trtype": "TCP", 00:21:30.931 "adrfam": "IPv4", 00:21:30.931 "traddr": "10.0.0.1", 00:21:30.931 "trsvcid": "59202" 00:21:30.931 }, 00:21:30.931 "auth": { 00:21:30.931 "state": "completed", 00:21:30.931 "digest": "sha512", 00:21:30.931 "dhgroup": "null" 00:21:30.931 } 00:21:30.931 } 00:21:30.931 ]' 00:21:30.931 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.187 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.187 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.187 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:31.187 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.187 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.187 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.188 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.445 22:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 
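
On the secrets themselves: every --dhchap-secret in this run follows the NVMe DH-HMAC-CHAP textual form DHHC-1:NN:<base64>:. As we read TP 8006, NN records how the configured secret was transformed (00 raw, 01/02/03 hashed with SHA-256/384/512), and the base64 payload carries the key with a 4-byte CRC-32 appended; the harness's key0..key3 visibly map to the 00: through 03: prefixes above. A quick sanity check one can run on such a string (secret taken from this log):

  # decoded payload length = key length + 4 CRC bytes (52 here -> 48-byte key)
  secret='DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==:'
  b64=${secret#DHHC-1:*:}
  echo -n "${b64%:}" | base64 -d | wc -c
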
00:21:32.378 22:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.378 22:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.378 22:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.378 22:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.378 22:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.378 22:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.378 22:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.378 22:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.637 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.202 00:21:33.202 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.202 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.202 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.202 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.202 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.202 22:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.202 22:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.202 22:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.202 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.202 { 00:21:33.202 "cntlid": 101, 00:21:33.202 "qid": 0, 00:21:33.202 "state": "enabled", 00:21:33.202 "listen_address": { 00:21:33.202 "trtype": "TCP", 00:21:33.202 "adrfam": "IPv4", 00:21:33.202 "traddr": "10.0.0.2", 00:21:33.202 "trsvcid": "4420" 00:21:33.202 }, 00:21:33.202 "peer_address": { 00:21:33.202 "trtype": "TCP", 00:21:33.202 "adrfam": "IPv4", 00:21:33.202 "traddr": "10.0.0.1", 00:21:33.202 "trsvcid": "59224" 00:21:33.202 }, 00:21:33.202 "auth": { 00:21:33.202 "state": "completed", 00:21:33.202 "digest": "sha512", 00:21:33.202 "dhgroup": "null" 00:21:33.202 } 00:21:33.202 } 00:21:33.202 ]' 00:21:33.461 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.461 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.461 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.461 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:33.461 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.461 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.461 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.461 22:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.720 22:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:21:34.688 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.688 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.688 22:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.688 22:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.688 22:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.688 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.688 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.688 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.946 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.204 00:21:35.462 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.462 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.462 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.721 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.721 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.721 22:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.721 22:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.721 22:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.721 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.721 { 00:21:35.721 "cntlid": 103, 00:21:35.721 "qid": 0, 00:21:35.721 "state": "enabled", 00:21:35.721 "listen_address": { 00:21:35.721 "trtype": "TCP", 00:21:35.721 "adrfam": "IPv4", 00:21:35.721 "traddr": "10.0.0.2", 00:21:35.721 "trsvcid": "4420" 00:21:35.721 }, 00:21:35.721 "peer_address": { 00:21:35.721 "trtype": "TCP", 00:21:35.721 "adrfam": "IPv4", 00:21:35.721 "traddr": "10.0.0.1", 00:21:35.721 "trsvcid": "59256" 00:21:35.721 }, 00:21:35.721 "auth": { 00:21:35.721 "state": "completed", 00:21:35.721 "digest": "sha512", 00:21:35.721 "dhgroup": "null" 00:21:35.721 } 00:21:35.721 } 00:21:35.721 ]' 00:21:35.721 22:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.721 22:51:28 
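
A small pattern worth noticing while eyeballing these dumps: cntlid climbs by two per pass (81, 83, ... 103 so far), which is consistent with the target handing out controller IDs sequentially while every pass creates two controllers, one for the SPDK-side attach whose qpair is dumped and one for the kernel nvme connect that follows. The value can be pulled with the same RPC used above:

  # controller ID of the subsystem's first active qpair
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].cntlid'
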
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.721 22:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.721 22:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:35.721 22:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.721 22:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.721 22:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.721 22:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.979 22:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:21:36.911 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.911 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.911 22:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.911 22:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.911 22:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.911 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.911 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.911 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.912 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.169 22:51:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.169 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.734 00:21:37.734 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.734 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.734 22:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.734 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.734 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.734 22:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.734 22:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.734 22:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.734 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.734 { 00:21:37.734 "cntlid": 105, 00:21:37.734 "qid": 0, 00:21:37.734 "state": "enabled", 00:21:37.734 "listen_address": { 00:21:37.734 "trtype": "TCP", 00:21:37.734 "adrfam": "IPv4", 00:21:37.734 "traddr": "10.0.0.2", 00:21:37.734 "trsvcid": "4420" 00:21:37.734 }, 00:21:37.734 "peer_address": { 00:21:37.734 "trtype": "TCP", 00:21:37.734 "adrfam": "IPv4", 00:21:37.734 "traddr": "10.0.0.1", 00:21:37.734 "trsvcid": "59030" 00:21:37.734 }, 00:21:37.734 "auth": { 00:21:37.734 "state": "completed", 00:21:37.734 "digest": "sha512", 00:21:37.734 "dhgroup": "ffdhe2048" 00:21:37.734 } 00:21:37.734 } 00:21:37.734 ]' 00:21:37.734 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.991 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.991 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.991 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:37.991 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.991 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.991 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.991 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.247 22:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:21:39.180 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.180 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.180 22:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.180 22:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.180 22:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.180 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.180 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.180 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.437 22:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.694 00:21:39.694 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.694 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.694 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.952 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.952 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.952 22:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.952 22:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.952 22:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.952 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.952 { 00:21:39.952 "cntlid": 107, 00:21:39.952 "qid": 0, 00:21:39.952 "state": "enabled", 00:21:39.952 "listen_address": { 00:21:39.952 "trtype": "TCP", 00:21:39.952 "adrfam": "IPv4", 00:21:39.952 "traddr": "10.0.0.2", 00:21:39.952 "trsvcid": "4420" 00:21:39.952 }, 00:21:39.952 "peer_address": { 00:21:39.952 "trtype": "TCP", 00:21:39.952 "adrfam": "IPv4", 00:21:39.952 "traddr": "10.0.0.1", 00:21:39.952 "trsvcid": "59050" 00:21:39.952 }, 00:21:39.952 "auth": { 00:21:39.952 "state": "completed", 00:21:39.952 "digest": "sha512", 00:21:39.952 "dhgroup": "ffdhe2048" 00:21:39.952 } 00:21:39.952 } 00:21:39.952 ]' 00:21:39.952 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.210 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.210 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.210 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.210 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.210 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.210 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.210 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.468 22:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:21:41.399 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.399 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.399 22:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.399 22:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.399 22:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.399 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.399 22:51:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.399 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.656 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:41.656 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.656 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.656 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:41.656 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:41.656 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.656 22:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.656 22:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.656 22:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.656 22:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.656 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.656 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.913 00:21:41.913 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.913 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.913 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.170 { 00:21:42.170 "cntlid": 109, 00:21:42.170 "qid": 0, 00:21:42.170 "state": "enabled", 00:21:42.170 "listen_address": { 00:21:42.170 "trtype": "TCP", 00:21:42.170 "adrfam": "IPv4", 00:21:42.170 "traddr": "10.0.0.2", 00:21:42.170 "trsvcid": "4420" 00:21:42.170 }, 00:21:42.170 "peer_address": { 00:21:42.170 "trtype": "TCP", 00:21:42.170 
"adrfam": "IPv4", 00:21:42.170 "traddr": "10.0.0.1", 00:21:42.170 "trsvcid": "59076" 00:21:42.170 }, 00:21:42.170 "auth": { 00:21:42.170 "state": "completed", 00:21:42.170 "digest": "sha512", 00:21:42.170 "dhgroup": "ffdhe2048" 00:21:42.170 } 00:21:42.170 } 00:21:42.170 ]' 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.170 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.429 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.429 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.429 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.687 22:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:21:43.622 22:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.622 22:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.622 22:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.622 22:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.622 22:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.622 22:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.622 22:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.622 22:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.880 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.138 00:21:44.138 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.138 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.138 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.396 { 00:21:44.396 "cntlid": 111, 00:21:44.396 "qid": 0, 00:21:44.396 "state": "enabled", 00:21:44.396 "listen_address": { 00:21:44.396 "trtype": "TCP", 00:21:44.396 "adrfam": "IPv4", 00:21:44.396 "traddr": "10.0.0.2", 00:21:44.396 "trsvcid": "4420" 00:21:44.396 }, 00:21:44.396 "peer_address": { 00:21:44.396 "trtype": "TCP", 00:21:44.396 "adrfam": "IPv4", 00:21:44.396 "traddr": "10.0.0.1", 00:21:44.396 "trsvcid": "59096" 00:21:44.396 }, 00:21:44.396 "auth": { 00:21:44.396 "state": "completed", 00:21:44.396 "digest": "sha512", 00:21:44.396 "dhgroup": "ffdhe2048" 00:21:44.396 } 00:21:44.396 } 00:21:44.396 ]' 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.396 22:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.654 22:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:21:45.586 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.586 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.586 22:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.586 22:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 22:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.586 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.586 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.586 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.586 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.844 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
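
The round beginning here (sha512 with ffdhe3072, key0) repeats the sequence every digest/dhgroup/key combination goes through: target/auth.sh@92-96 loops over DH groups and key ids around the connect_authenticate helper. A sketch of one round as reconstructed from the commands logged above, assuming the paths, NQNs, and addresses shown in this trace (key0/ckey0 stand in for whichever entry of the script's key arrays is being exercised):

    # One round of the digest x dhgroup x key loop, reconstructed from the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app socket

    subsys=nqn.2024-03.io.spdk:cnode0
    host_nqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Pin the host to a single digest/DH-group pair, register this host's
    # DH-HMAC-CHAP key (plus optional controller key) on the subsystem, then
    # attach a controller -- the attach only succeeds if authentication completes.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    "$rpc" nvmf_subsystem_add_host "$subsys" "$host_nqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$host_nqn" -n "$subsys" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

After the attach, the qpair assertions shown earlier run, the controller is detached, and the same handshake is repeated from the kernel initiator before the host is deregistered.
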
00:21:46.410 00:21:46.410 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.410 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.410 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.668 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.668 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.668 22:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.668 22:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.668 22:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.668 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.668 { 00:21:46.668 "cntlid": 113, 00:21:46.668 "qid": 0, 00:21:46.668 "state": "enabled", 00:21:46.668 "listen_address": { 00:21:46.668 "trtype": "TCP", 00:21:46.668 "adrfam": "IPv4", 00:21:46.668 "traddr": "10.0.0.2", 00:21:46.668 "trsvcid": "4420" 00:21:46.668 }, 00:21:46.668 "peer_address": { 00:21:46.668 "trtype": "TCP", 00:21:46.668 "adrfam": "IPv4", 00:21:46.668 "traddr": "10.0.0.1", 00:21:46.668 "trsvcid": "59112" 00:21:46.668 }, 00:21:46.668 "auth": { 00:21:46.668 "state": "completed", 00:21:46.668 "digest": "sha512", 00:21:46.668 "dhgroup": "ffdhe3072" 00:21:46.668 } 00:21:46.668 } 00:21:46.668 ]' 00:21:46.668 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.668 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.668 22:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.668 22:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:46.668 22:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.668 22:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.668 22:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.668 22:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.926 22:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:21:47.859 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.859 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.859 22:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:47.859 22:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.859 22:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.859 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.859 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.859 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.117 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.682 00:21:48.682 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.682 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.682 22:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.940 { 00:21:48.940 
"cntlid": 115, 00:21:48.940 "qid": 0, 00:21:48.940 "state": "enabled", 00:21:48.940 "listen_address": { 00:21:48.940 "trtype": "TCP", 00:21:48.940 "adrfam": "IPv4", 00:21:48.940 "traddr": "10.0.0.2", 00:21:48.940 "trsvcid": "4420" 00:21:48.940 }, 00:21:48.940 "peer_address": { 00:21:48.940 "trtype": "TCP", 00:21:48.940 "adrfam": "IPv4", 00:21:48.940 "traddr": "10.0.0.1", 00:21:48.940 "trsvcid": "60376" 00:21:48.940 }, 00:21:48.940 "auth": { 00:21:48.940 "state": "completed", 00:21:48.940 "digest": "sha512", 00:21:48.940 "dhgroup": "ffdhe3072" 00:21:48.940 } 00:21:48.940 } 00:21:48.940 ]' 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.940 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.219 22:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:21:50.165 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.166 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.166 22:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.166 22:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.166 22:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.166 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.166 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.166 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.423 22:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.424 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.424 22:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.989 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.989 { 00:21:50.989 "cntlid": 117, 00:21:50.989 "qid": 0, 00:21:50.989 "state": "enabled", 00:21:50.989 "listen_address": { 00:21:50.989 "trtype": "TCP", 00:21:50.989 "adrfam": "IPv4", 00:21:50.989 "traddr": "10.0.0.2", 00:21:50.989 "trsvcid": "4420" 00:21:50.989 }, 00:21:50.989 "peer_address": { 00:21:50.989 "trtype": "TCP", 00:21:50.989 "adrfam": "IPv4", 00:21:50.989 "traddr": "10.0.0.1", 00:21:50.989 "trsvcid": "60416" 00:21:50.989 }, 00:21:50.989 "auth": { 00:21:50.989 "state": "completed", 00:21:50.989 "digest": "sha512", 00:21:50.989 "dhgroup": "ffdhe3072" 00:21:50.989 } 00:21:50.989 } 00:21:50.989 ]' 00:21:50.989 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.247 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.247 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.247 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.247 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:51.247 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.247 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.247 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.504 22:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:21:52.436 22:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.436 22:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.436 22:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.436 22:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.436 22:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.436 22:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.436 22:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.436 22:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.693 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.950 00:21:53.207 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.207 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.207 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.465 { 00:21:53.465 "cntlid": 119, 00:21:53.465 "qid": 0, 00:21:53.465 "state": "enabled", 00:21:53.465 "listen_address": { 00:21:53.465 "trtype": "TCP", 00:21:53.465 "adrfam": "IPv4", 00:21:53.465 "traddr": "10.0.0.2", 00:21:53.465 "trsvcid": "4420" 00:21:53.465 }, 00:21:53.465 "peer_address": { 00:21:53.465 "trtype": "TCP", 00:21:53.465 "adrfam": "IPv4", 00:21:53.465 "traddr": "10.0.0.1", 00:21:53.465 "trsvcid": "60442" 00:21:53.465 }, 00:21:53.465 "auth": { 00:21:53.465 "state": "completed", 00:21:53.465 "digest": "sha512", 00:21:53.465 "dhgroup": "ffdhe3072" 00:21:53.465 } 00:21:53.465 } 00:21:53.465 ]' 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.465 22:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.722 22:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:21:54.654 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.654 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.654 22:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.654 22:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.654 22:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.654 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.654 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.654 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:54.654 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.911 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.475 00:21:55.475 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.475 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.475 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.733 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.733 22:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.733 22:51:48 
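
Each round closes with the kernel-initiator leg logged at target/auth.sh@49-56: detach the SPDK host controller, redo the handshake with nvme-cli, then deregister the host. A sketch of that tail, assuming the addresses and NQNs from this trace; $key and $ckey are placeholders for the DHHC-1 secrets that appear verbatim in the log:

    # Kernel-initiator tail of a round, reconstructed from the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Drop the host-side controller, then prove the same keys work for the
    # in-kernel NVMe/TCP initiator via nvme-cli.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

The "disconnected 1 controller(s)" lines in the trace are nvme-cli's confirmation of the disconnect step; a connect that failed authentication would surface here instead.
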
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.733 { 00:21:55.733 "cntlid": 121, 00:21:55.733 "qid": 0, 00:21:55.733 "state": "enabled", 00:21:55.733 "listen_address": { 00:21:55.733 "trtype": "TCP", 00:21:55.733 "adrfam": "IPv4", 00:21:55.733 "traddr": "10.0.0.2", 00:21:55.733 "trsvcid": "4420" 00:21:55.733 }, 00:21:55.733 "peer_address": { 00:21:55.733 "trtype": "TCP", 00:21:55.733 "adrfam": "IPv4", 00:21:55.733 "traddr": "10.0.0.1", 00:21:55.733 "trsvcid": "60476" 00:21:55.733 }, 00:21:55.733 "auth": { 00:21:55.733 "state": "completed", 00:21:55.733 "digest": "sha512", 00:21:55.733 "dhgroup": "ffdhe4096" 00:21:55.733 } 00:21:55.733 } 00:21:55.733 ]' 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.733 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.990 22:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:21:56.922 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.922 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.922 22:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.922 22:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.922 22:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.922 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.922 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.922 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.180 22:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.745 00:21:57.745 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.745 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.745 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.004 { 00:21:58.004 "cntlid": 123, 00:21:58.004 "qid": 0, 00:21:58.004 "state": "enabled", 00:21:58.004 "listen_address": { 00:21:58.004 "trtype": "TCP", 00:21:58.004 "adrfam": "IPv4", 00:21:58.004 "traddr": "10.0.0.2", 00:21:58.004 "trsvcid": "4420" 00:21:58.004 }, 00:21:58.004 "peer_address": { 00:21:58.004 "trtype": "TCP", 00:21:58.004 "adrfam": "IPv4", 00:21:58.004 "traddr": "10.0.0.1", 00:21:58.004 "trsvcid": "47222" 00:21:58.004 }, 00:21:58.004 "auth": { 00:21:58.004 "state": "completed", 00:21:58.004 "digest": "sha512", 00:21:58.004 "dhgroup": "ffdhe4096" 00:21:58.004 } 00:21:58.004 } 00:21:58.004 ]' 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.004 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.262 22:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:21:59.633 22:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.633 22:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.633 22:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.633 22:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.633 22:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.633 22:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.633 22:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.633 22:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.633 
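[Editor's sketch] The trace above has just completed one full authentication cycle for key1 (sha512 digest, ffdhe4096 DH group), and each (digest, dhgroup, key) iteration reduces to the same RPC/CLI skeleton. A minimal standalone sketch of one iteration follows; it assumes two SPDK applications (the target on the default RPC socket, the host bdev application on /var/tmp/host.sock) and reuses the NQNs, address, and key names from this run.

  # One connect_authenticate iteration, stripped to its essential calls.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Restrict the host to the digest/dhgroup pair under test.
  $RPC -s $HOSTSOCK bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Register the host on the target with its DH-HMAC-CHAP keys;
  # the controller key (ckey1) makes the authentication bidirectional.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach a controller from the host side; the DH-HMAC-CHAP
  # transaction runs during this connect.
  $RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Confirm the controller exists, then tear down for the next iteration.
  $RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0
  $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"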
22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.633 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.890 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.148 { 00:22:00.148 "cntlid": 125, 00:22:00.148 "qid": 0, 00:22:00.148 "state": "enabled", 00:22:00.148 "listen_address": { 00:22:00.148 "trtype": "TCP", 00:22:00.148 "adrfam": "IPv4", 00:22:00.148 "traddr": "10.0.0.2", 00:22:00.148 "trsvcid": "4420" 00:22:00.148 }, 00:22:00.148 "peer_address": { 00:22:00.148 "trtype": "TCP", 00:22:00.148 "adrfam": "IPv4", 00:22:00.148 "traddr": "10.0.0.1", 00:22:00.148 "trsvcid": "47254" 00:22:00.148 }, 00:22:00.148 "auth": { 00:22:00.148 "state": "completed", 00:22:00.148 "digest": "sha512", 00:22:00.148 "dhgroup": "ffdhe4096" 00:22:00.148 } 00:22:00.148 } 00:22:00.148 ]' 00:22:00.148 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.405 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.405 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.405 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.405 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.405 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.405 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.405 22:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.662 22:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:22:01.592 22:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.592 22:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.592 22:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.592 22:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.592 22:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.592 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.592 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.592 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.849 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.415 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.415 { 00:22:02.415 "cntlid": 127, 00:22:02.415 "qid": 0, 00:22:02.415 "state": "enabled", 00:22:02.415 "listen_address": { 00:22:02.415 "trtype": "TCP", 00:22:02.415 "adrfam": "IPv4", 00:22:02.415 "traddr": "10.0.0.2", 00:22:02.415 "trsvcid": "4420" 00:22:02.415 }, 00:22:02.415 "peer_address": { 00:22:02.415 "trtype": "TCP", 00:22:02.415 "adrfam": "IPv4", 00:22:02.415 "traddr": "10.0.0.1", 00:22:02.415 "trsvcid": "47282" 00:22:02.415 }, 00:22:02.415 "auth": { 00:22:02.415 "state": "completed", 00:22:02.415 "digest": "sha512", 00:22:02.415 "dhgroup": "ffdhe4096" 00:22:02.415 } 00:22:02.415 } 00:22:02.415 ]' 00:22:02.415 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.672 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.672 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.672 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:02.672 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.672 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.672 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.672 22:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.930 22:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:22:03.862 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.862 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.862 22:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.862 22:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.862 22:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.862 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.862 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.862 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
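[Editor's sketch] The qpair listings interleaved with the trace (cntlid 123, 125, 127, ...) are the data behind the assertions at target/auth.sh@46-48. A condensed sketch of that verification step, reusing the variables from the previous sketch; the expected digest and dhgroup track whichever pair is under test.

  # Fetch the subsystem's active qpairs from the target and verify
  # the negotiated authentication parameters on the first qpair.
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]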
00:22:03.862 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.120 22:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.746 00:22:04.746 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.746 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.746 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.003 { 00:22:05.003 "cntlid": 129, 00:22:05.003 "qid": 0, 00:22:05.003 "state": "enabled", 00:22:05.003 "listen_address": { 00:22:05.003 "trtype": "TCP", 00:22:05.003 "adrfam": "IPv4", 00:22:05.003 "traddr": "10.0.0.2", 00:22:05.003 "trsvcid": "4420" 00:22:05.003 }, 00:22:05.003 "peer_address": { 00:22:05.003 "trtype": "TCP", 00:22:05.003 "adrfam": "IPv4", 00:22:05.003 "traddr": "10.0.0.1", 00:22:05.003 "trsvcid": "47304" 00:22:05.003 }, 00:22:05.003 "auth": { 
00:22:05.003 "state": "completed", 00:22:05.003 "digest": "sha512", 00:22:05.003 "dhgroup": "ffdhe6144" 00:22:05.003 } 00:22:05.003 } 00:22:05.003 ]' 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.003 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.261 22:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:22:06.193 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.193 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.193 22:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.193 22:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.193 22:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.193 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.193 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.193 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.450 22:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.015 00:22:07.015 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.015 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.015 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.272 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.272 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.272 22:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.272 22:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.272 22:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.272 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.272 { 00:22:07.272 "cntlid": 131, 00:22:07.272 "qid": 0, 00:22:07.272 "state": "enabled", 00:22:07.272 "listen_address": { 00:22:07.272 "trtype": "TCP", 00:22:07.272 "adrfam": "IPv4", 00:22:07.272 "traddr": "10.0.0.2", 00:22:07.272 "trsvcid": "4420" 00:22:07.272 }, 00:22:07.273 "peer_address": { 00:22:07.273 "trtype": "TCP", 00:22:07.273 "adrfam": "IPv4", 00:22:07.273 "traddr": "10.0.0.1", 00:22:07.273 "trsvcid": "47340" 00:22:07.273 }, 00:22:07.273 "auth": { 00:22:07.273 "state": "completed", 00:22:07.273 "digest": "sha512", 00:22:07.273 "dhgroup": "ffdhe6144" 00:22:07.273 } 00:22:07.273 } 00:22:07.273 ]' 00:22:07.273 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.273 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.273 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.273 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.273 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.531 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.531 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.531 22:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.789 22:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:22:08.721 22:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.721 22:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.721 22:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.721 22:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.721 22:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.721 22:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.721 22:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:08.721 22:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.978 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:09.544 00:22:09.544 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.545 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.545 22:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.803 { 00:22:09.803 "cntlid": 133, 00:22:09.803 "qid": 0, 00:22:09.803 "state": "enabled", 00:22:09.803 "listen_address": { 00:22:09.803 "trtype": "TCP", 00:22:09.803 "adrfam": "IPv4", 00:22:09.803 "traddr": "10.0.0.2", 00:22:09.803 "trsvcid": "4420" 00:22:09.803 }, 00:22:09.803 "peer_address": { 00:22:09.803 "trtype": "TCP", 00:22:09.803 "adrfam": "IPv4", 00:22:09.803 "traddr": "10.0.0.1", 00:22:09.803 "trsvcid": "41114" 00:22:09.803 }, 00:22:09.803 "auth": { 00:22:09.803 "state": "completed", 00:22:09.803 "digest": "sha512", 00:22:09.803 "dhgroup": "ffdhe6144" 00:22:09.803 } 00:22:09.803 } 00:22:09.803 ]' 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.803 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.061 22:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:22:10.994 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.994 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.994 22:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.994 22:52:03 
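[Editor's sketch] The nvme connect invocations in the trace push the same keys through the kernel host stack rather than the SPDK bdev layer. The --dhchap-secret/--dhchap-ctrl-secret values use the spec's DHHC-1 key representation, DHHC-1:<t>:<base64 secret + CRC>:, where <t> is 00 for an untransformed secret and 01/02/03 for one transformed with SHA-256/SHA-384/SHA-512. A sketch with truncated placeholder secrets (the full values appear verbatim in the trace):

  # Kernel-initiator side of the same authentication, via nvme-cli.
  HOSTKEY='DHHC-1:01:<base64>:'   # placeholder; host secret, SHA-256 transformed
  CTRLKEY='DHHC-1:02:<base64>:'   # placeholder; controller secret (bidirectional)

  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$HOSTKEY" --dhchap-ctrl-secret "$CTRLKEY"

  nvme disconnect -n "$SUBNQN"    # logs: "... disconnected 1 controller(s)"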
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.994 22:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.994 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.994 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.994 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.251 22:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.816 00:22:11.816 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.816 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.816 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.074 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.074 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.074 22:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.074 22:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.074 22:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.074 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.074 { 00:22:12.074 "cntlid": 135, 00:22:12.074 "qid": 0, 00:22:12.074 "state": "enabled", 00:22:12.074 "listen_address": { 
00:22:12.074 "trtype": "TCP", 00:22:12.074 "adrfam": "IPv4", 00:22:12.074 "traddr": "10.0.0.2", 00:22:12.074 "trsvcid": "4420" 00:22:12.074 }, 00:22:12.074 "peer_address": { 00:22:12.074 "trtype": "TCP", 00:22:12.074 "adrfam": "IPv4", 00:22:12.074 "traddr": "10.0.0.1", 00:22:12.074 "trsvcid": "41132" 00:22:12.074 }, 00:22:12.074 "auth": { 00:22:12.074 "state": "completed", 00:22:12.074 "digest": "sha512", 00:22:12.074 "dhgroup": "ffdhe6144" 00:22:12.074 } 00:22:12.074 } 00:22:12.074 ]' 00:22:12.074 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.332 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.332 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.332 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:12.332 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.332 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.332 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.332 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.590 22:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:22:13.524 22:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.524 22:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.524 22:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.524 22:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.524 22:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.524 22:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.524 22:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.524 22:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.524 22:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.782 22:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.715 00:22:14.715 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.715 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.715 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.973 { 00:22:14.973 "cntlid": 137, 00:22:14.973 "qid": 0, 00:22:14.973 "state": "enabled", 00:22:14.973 "listen_address": { 00:22:14.973 "trtype": "TCP", 00:22:14.973 "adrfam": "IPv4", 00:22:14.973 "traddr": "10.0.0.2", 00:22:14.973 "trsvcid": "4420" 00:22:14.973 }, 00:22:14.973 "peer_address": { 00:22:14.973 "trtype": "TCP", 00:22:14.973 "adrfam": "IPv4", 00:22:14.973 "traddr": "10.0.0.1", 00:22:14.973 "trsvcid": "41174" 00:22:14.973 }, 00:22:14.973 "auth": { 00:22:14.973 "state": "completed", 00:22:14.973 "digest": "sha512", 00:22:14.973 "dhgroup": "ffdhe8192" 00:22:14.973 } 00:22:14.973 } 00:22:14.973 ]' 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.973 22:52:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.973 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.538 22:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:22:16.472 22:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.472 22:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.472 22:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.472 22:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.472 22:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.472 22:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.472 22:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.472 22:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.730 22:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.730 22:52:09 
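[Editor's sketch] The DHHC-1 strings exercised throughout this run were provisioned before this phase. For reproducing such a setup by hand, nvme-cli ships a key generator; the flags below are from its gen-dhchap-key subcommand and should be checked against the installed nvme-cli version (treat them as an assumption, not part of this run).

  # Generate a 48-byte secret bound to the host NQN, transformed with
  # SHA-384 (--hmac=2); prints a DHHC-1:02:...: string to stdout.
  nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn "$HOSTNQN"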
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.663 00:22:17.663 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.663 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.663 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.922 { 00:22:17.922 "cntlid": 139, 00:22:17.922 "qid": 0, 00:22:17.922 "state": "enabled", 00:22:17.922 "listen_address": { 00:22:17.922 "trtype": "TCP", 00:22:17.922 "adrfam": "IPv4", 00:22:17.922 "traddr": "10.0.0.2", 00:22:17.922 "trsvcid": "4420" 00:22:17.922 }, 00:22:17.922 "peer_address": { 00:22:17.922 "trtype": "TCP", 00:22:17.922 "adrfam": "IPv4", 00:22:17.922 "traddr": "10.0.0.1", 00:22:17.922 "trsvcid": "44200" 00:22:17.922 }, 00:22:17.922 "auth": { 00:22:17.922 "state": "completed", 00:22:17.922 "digest": "sha512", 00:22:17.922 "dhgroup": "ffdhe8192" 00:22:17.922 } 00:22:17.922 } 00:22:17.922 ]' 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.922 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.180 22:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGU4ZmY1YTQ1NjFhYTVkNmI5NWI1YjY0YzEyOTM3Y2N3iUfL: --dhchap-ctrl-secret DHHC-1:02:NmFhZDA5YzJiYTFmNTFiOGIyMjFkZmMzN2UzYzAxODE0YWQxYTZkMzEwYjc2ODJh8yuKtw==: 00:22:19.114 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:19.114 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.114 22:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.114 22:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.114 22:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.114 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.114 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.114 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.421 22:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.355 00:22:20.355 22:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.355 22:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.355 22:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.613 { 00:22:20.613 "cntlid": 141, 00:22:20.613 "qid": 0, 00:22:20.613 "state": "enabled", 00:22:20.613 "listen_address": { 00:22:20.613 "trtype": "TCP", 00:22:20.613 "adrfam": "IPv4", 00:22:20.613 "traddr": "10.0.0.2", 00:22:20.613 "trsvcid": "4420" 00:22:20.613 }, 00:22:20.613 "peer_address": { 00:22:20.613 "trtype": "TCP", 00:22:20.613 "adrfam": "IPv4", 00:22:20.613 "traddr": "10.0.0.1", 00:22:20.613 "trsvcid": "44222" 00:22:20.613 }, 00:22:20.613 "auth": { 00:22:20.613 "state": "completed", 00:22:20.613 "digest": "sha512", 00:22:20.613 "dhgroup": "ffdhe8192" 00:22:20.613 } 00:22:20.613 } 00:22:20.613 ]' 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.613 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.872 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.872 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.872 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.132 22:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGY4ZTc0MDc5Y2NjMmQ5ZTg3YWYxNWM0YTU5YmU4MzUxMTljMDFhYjhmNTg0N2VlQx4FVg==: --dhchap-ctrl-secret DHHC-1:01:NGI3ZTEwNzI2N2VjNWIwYWRiNGQ0OWU4Yzk5OTM0NDSBP0yX: 00:22:22.063 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.063 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.063 22:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.063 22:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.063 22:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.063 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.063 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.063 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:22.321 22:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.254 00:22:23.254 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.254 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.254 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.511 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.511 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.511 22:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.511 22:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.511 22:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.511 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.511 { 00:22:23.511 "cntlid": 143, 00:22:23.511 "qid": 0, 00:22:23.511 "state": "enabled", 00:22:23.511 "listen_address": { 00:22:23.511 "trtype": "TCP", 00:22:23.511 "adrfam": "IPv4", 00:22:23.511 "traddr": "10.0.0.2", 00:22:23.511 "trsvcid": "4420" 00:22:23.511 }, 00:22:23.511 "peer_address": { 00:22:23.511 "trtype": "TCP", 00:22:23.511 "adrfam": "IPv4", 00:22:23.511 "traddr": "10.0.0.1", 00:22:23.511 "trsvcid": "44252" 00:22:23.511 }, 00:22:23.511 "auth": { 00:22:23.511 "state": "completed", 00:22:23.511 "digest": "sha512", 00:22:23.511 "dhgroup": "ffdhe8192" 00:22:23.511 } 00:22:23.511 } 00:22:23.511 ]' 00:22:23.511 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.511 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.511 22:52:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.512 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.512 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.512 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.512 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.512 22:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.770 22:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:24.702 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
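[Annotation] The bidirectional authentication round traced here reduces to the sketch below; the attach that completes the round follows just after this point in the trace. The rpc.py path, sockets, NQNs and addresses are copied verbatim from the trace above; key0/ckey0 refer to key material registered earlier in the run (not shown in this excerpt), and rpc_cmd in the trace may add namespace/socket arguments that are elided here.

# Sketch of one connect_authenticate round (digest=sha512, dhgroup=ffdhe8192),
# condensed from the commands visible in this trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: restrict negotiation to the digest/dhgroup under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Target side: authorize the host NQN with its DH-CHAP key and ctrlr key.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach; DH-HMAC-CHAP runs during the fabric CONNECT.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0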
00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.960 22:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.892 00:22:25.892 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.892 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.892 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.149 { 00:22:26.149 "cntlid": 145, 00:22:26.149 "qid": 0, 00:22:26.149 "state": "enabled", 00:22:26.149 "listen_address": { 00:22:26.149 "trtype": "TCP", 00:22:26.149 "adrfam": "IPv4", 00:22:26.149 "traddr": "10.0.0.2", 00:22:26.149 "trsvcid": "4420" 00:22:26.149 }, 00:22:26.149 "peer_address": { 00:22:26.149 "trtype": "TCP", 00:22:26.149 "adrfam": "IPv4", 00:22:26.149 "traddr": "10.0.0.1", 00:22:26.149 "trsvcid": "44270" 00:22:26.149 }, 00:22:26.149 "auth": { 00:22:26.149 "state": "completed", 00:22:26.149 "digest": "sha512", 00:22:26.149 "dhgroup": "ffdhe8192" 00:22:26.149 } 00:22:26.149 } 00:22:26.149 ]' 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.149 22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.408 
22:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjJkMTEyYTQwNjkxNGQ5YzQwODVlNGMzZGI3ZDk3ZDQ5YmQxZDM2OWJlNjM3ZTdjd/AfXg==: --dhchap-ctrl-secret DHHC-1:03:NzQ5YjgxNGM3N2MzODU1YjIwNTJhYTQyMmRhMTgzZmY1YTVjYjNkMGY1NzkzYmNlMGQ1MGIyMDZhZDUwNjQzN/RBk0Q=: 00:22:27.339 22:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.339 22:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.339 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.339 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:27.597 22:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:28.530 request: 00:22:28.530 { 00:22:28.530 "name": "nvme0", 00:22:28.530 "trtype": "tcp", 00:22:28.530 "traddr": 
"10.0.0.2", 00:22:28.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.530 "adrfam": "ipv4", 00:22:28.530 "trsvcid": "4420", 00:22:28.530 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.530 "dhchap_key": "key2", 00:22:28.530 "method": "bdev_nvme_attach_controller", 00:22:28.530 "req_id": 1 00:22:28.530 } 00:22:28.530 Got JSON-RPC error response 00:22:28.530 response: 00:22:28.530 { 00:22:28.530 "code": -5, 00:22:28.530 "message": "Input/output error" 00:22:28.530 } 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:28.530 22:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:29.095 request: 00:22:29.095 { 00:22:29.095 "name": "nvme0", 00:22:29.095 "trtype": "tcp", 00:22:29.095 "traddr": "10.0.0.2", 00:22:29.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.095 "adrfam": "ipv4", 00:22:29.095 "trsvcid": "4420", 00:22:29.095 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.095 "dhchap_key": "key1", 00:22:29.095 "dhchap_ctrlr_key": "ckey2", 00:22:29.095 "method": "bdev_nvme_attach_controller", 00:22:29.095 "req_id": 1 00:22:29.095 } 00:22:29.095 Got JSON-RPC error response 00:22:29.095 response: 00:22:29.095 { 00:22:29.095 "code": -5, 00:22:29.095 "message": "Input/output error" 00:22:29.095 } 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.095 22:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.029 request: 00:22:30.029 { 00:22:30.029 "name": "nvme0", 00:22:30.029 "trtype": "tcp", 00:22:30.029 "traddr": "10.0.0.2", 00:22:30.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:30.029 "adrfam": "ipv4", 00:22:30.029 "trsvcid": "4420", 00:22:30.029 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.029 "dhchap_key": "key1", 00:22:30.029 "dhchap_ctrlr_key": "ckey1", 00:22:30.029 "method": "bdev_nvme_attach_controller", 00:22:30.029 "req_id": 1 00:22:30.029 } 00:22:30.029 Got JSON-RPC error response 00:22:30.029 response: 00:22:30.029 { 00:22:30.029 "code": -5, 00:22:30.029 "message": "Input/output error" 00:22:30.029 } 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3550286 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3550286 ']' 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3550286 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3550286 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3550286' 00:22:30.029 killing process with pid 3550286 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3550286 00:22:30.029 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3550286 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:30.287 22:52:22 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3572753 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3572753 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3572753 ']' 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.287 22:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.546 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:30.546 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:30.546 22:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.546 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.546 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.804 22:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.804 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:30.804 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3572753 00:22:30.804 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3572753 ']' 00:22:30.804 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.804 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.804 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
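[Annotation] The restart sequence just traced reduces to the sketch below. The binary path, namespace name and flags are verbatim from the trace; the polling loop is only an approximation of what waitforlisten does, not its exact implementation.

# Relaunch the target inside its namespace with auth debug logging enabled,
# pausing subsystem init until the first RPC (--wait-for-rpc).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Approximation of waitforlisten: poll the RPC socket until it answers.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target process died
    sleep 0.5
done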
00:22:30.804 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.804 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:31.061 22:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:31.992 00:22:31.992 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:31.992 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:31.992 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.250 { 00:22:32.250 
"cntlid": 1, 00:22:32.250 "qid": 0, 00:22:32.250 "state": "enabled", 00:22:32.250 "listen_address": { 00:22:32.250 "trtype": "TCP", 00:22:32.250 "adrfam": "IPv4", 00:22:32.250 "traddr": "10.0.0.2", 00:22:32.250 "trsvcid": "4420" 00:22:32.250 }, 00:22:32.250 "peer_address": { 00:22:32.250 "trtype": "TCP", 00:22:32.250 "adrfam": "IPv4", 00:22:32.250 "traddr": "10.0.0.1", 00:22:32.250 "trsvcid": "37194" 00:22:32.250 }, 00:22:32.250 "auth": { 00:22:32.250 "state": "completed", 00:22:32.250 "digest": "sha512", 00:22:32.250 "dhgroup": "ffdhe8192" 00:22:32.250 } 00:22:32.250 } 00:22:32.250 ]' 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.250 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.507 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.507 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.507 22:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.765 22:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmMyMzIxM2NkMmEwMzkxYzEyMGFlOGIyMzdkOGUwODI2MWY0MWJhNWE2ZjlkYjU5ZTc0Nzg0NDUwNDU5YjZlYSl7zGw=: 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:33.697 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:33.955 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.955 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:33.955 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.955 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:33.955 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.955 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:33.955 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.955 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.955 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.214 request: 00:22:34.214 { 00:22:34.214 "name": "nvme0", 00:22:34.214 "trtype": "tcp", 00:22:34.214 "traddr": "10.0.0.2", 00:22:34.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:34.214 "adrfam": "ipv4", 00:22:34.214 "trsvcid": "4420", 00:22:34.214 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:34.214 "dhchap_key": "key3", 00:22:34.214 "method": "bdev_nvme_attach_controller", 00:22:34.214 "req_id": 1 00:22:34.214 } 00:22:34.214 Got JSON-RPC error response 00:22:34.214 response: 00:22:34.214 { 00:22:34.214 "code": -5, 00:22:34.214 "message": "Input/output error" 00:22:34.214 } 00:22:34.214 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:34.214 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:34.214 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:34.214 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:34.214 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:34.214 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:34.214 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:34.214 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:34.472 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.472 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:34.472 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.472 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:34.472 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.472 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:34.472 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.472 22:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.472 22:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.745 request: 00:22:34.745 { 00:22:34.745 "name": "nvme0", 00:22:34.745 "trtype": "tcp", 00:22:34.745 "traddr": "10.0.0.2", 00:22:34.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:34.745 "adrfam": "ipv4", 00:22:34.745 "trsvcid": "4420", 00:22:34.745 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:34.745 "dhchap_key": "key3", 00:22:34.745 "method": "bdev_nvme_attach_controller", 00:22:34.745 "req_id": 1 00:22:34.745 } 00:22:34.745 Got JSON-RPC error response 00:22:34.745 response: 00:22:34.745 { 00:22:34.745 "code": -5, 00:22:34.745 "message": "Input/output error" 00:22:34.745 } 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.745 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:35.026 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:35.592 request: 00:22:35.592 { 00:22:35.592 "name": "nvme0", 00:22:35.592 "trtype": "tcp", 00:22:35.592 "traddr": "10.0.0.2", 00:22:35.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:35.592 "adrfam": "ipv4", 00:22:35.592 "trsvcid": "4420", 00:22:35.592 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:35.592 "dhchap_key": "key0", 00:22:35.592 "dhchap_ctrlr_key": "key1", 00:22:35.592 "method": "bdev_nvme_attach_controller", 00:22:35.592 "req_id": 1 00:22:35.592 } 00:22:35.592 Got JSON-RPC error response 00:22:35.592 response: 00:22:35.592 { 00:22:35.592 "code": -5, 00:22:35.592 "message": "Input/output error" 00:22:35.592 } 00:22:35.592 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:35.593 22:52:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:35.593 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:35.593 22:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:35.593 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:35.593 22:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:35.593 00:22:35.593 22:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:35.593 22:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:35.593 22:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.851 22:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.851 22:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.851 22:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3550315 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3550315 ']' 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3550315 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3550315 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3550315' 00:22:36.416 killing process with pid 3550315 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3550315 00:22:36.416 22:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3550315 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
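[Annotation] The unload loop that follows is the usual teardown pattern: module removal can race with connections still tearing down, so errexit is suspended and the removal is retried. A condensed sketch, assuming a retry-on-failure body; only the loop header and the successful unload (the rmmod lines printed by modprobe -v) are visible in the trace below.

# Condensed teardown sketch: retry module unload while connections drain.
set +e
for i in {1..20}; do
    # modprobe -v prints the rmmod lines seen below (nvme_tcp,
    # nvme_fabrics, nvme_keyring are removed as dependencies).
    modprobe -v -r nvme-tcp && break
    sleep 1   # assumed back-off; not visible in the trace
done
modprobe -v -r nvme-fabrics
set -e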
00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.675 rmmod nvme_tcp 00:22:36.675 rmmod nvme_fabrics 00:22:36.675 rmmod nvme_keyring 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3572753 ']' 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3572753 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3572753 ']' 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3572753 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3572753 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3572753' 00:22:36.675 killing process with pid 3572753 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3572753 00:22:36.675 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3572753 00:22:36.934 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:36.934 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:36.934 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:36.934 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:36.934 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:36.934 22:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.934 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.934 22:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.464 22:52:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:39.465 22:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.r6m /tmp/spdk.key-sha256.0kC /tmp/spdk.key-sha384.oSL /tmp/spdk.key-sha512.qqE /tmp/spdk.key-sha512.KIJ /tmp/spdk.key-sha384.VRj /tmp/spdk.key-sha256.LXo '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:39.465 00:22:39.465 real 3m9.006s 00:22:39.465 user 7m19.670s 00:22:39.465 sys 0m25.135s 00:22:39.465 22:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:39.465 22:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.465 ************************************ 00:22:39.465 END TEST 
nvmf_auth_target 00:22:39.465 ************************************ 00:22:39.465 22:52:31 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:39.465 22:52:31 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:39.465 22:52:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:39.465 22:52:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:39.465 22:52:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.465 ************************************ 00:22:39.465 START TEST nvmf_bdevio_no_huge 00:22:39.465 ************************************ 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:39.465 * Looking for test storage... 00:22:39.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
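[Annotation] One detail of the common.sh setup traced above: the host identity is generated once with nvme-cli and reused for both --hostnqn and --hostid. A minimal sketch, assuming the UUID is extracted as the last ':'-separated field of the generated NQN (only the resulting values, not the expansion itself, are visible in the trace):

# Sketch of the host-identity setup: one generated NQN drives both IDs.
NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed extraction of the <uuid> suffix
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")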
00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.465 22:52:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:41.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.366 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:41.367 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.367 22:52:33 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:41.367 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:41.367 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.367 
22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:41.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:22:41.367 00:22:41.367 --- 10.0.0.2 ping statistics --- 00:22:41.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.367 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:22:41.367 00:22:41.367 --- 10.0.0.1 ping statistics --- 00:22:41.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.367 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3575537 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3575537 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3575537 ']' 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:41.367 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.367 [2024-07-26 22:52:33.756343] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:41.367 [2024-07-26 22:52:33.756436] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:41.367 [2024-07-26 22:52:33.825351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.624 [2024-07-26 22:52:33.904992] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.624 [2024-07-26 22:52:33.905042] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.624 [2024-07-26 22:52:33.905087] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.624 [2024-07-26 22:52:33.905100] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.624 [2024-07-26 22:52:33.905110] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.624 [2024-07-26 22:52:33.905195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:41.624 [2024-07-26 22:52:33.905277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:41.624 [2024-07-26 22:52:33.905280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.625 [2024-07-26 22:52:33.905218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:41.625 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:41.625 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:41.625 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.625 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.625 22:52:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.625 [2024-07-26 22:52:34.027848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
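For anyone replaying this setup by hand: the nvmftestinit/nvmf_tcp_init trace earlier in this test amounts to moving one port of the NIC into a private network namespace, so the target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, root namespace) exchange real TCP traffic over the wire. A minimal sketch using the interface names from this run (cvl_0_0/cvl_0_1); this condenses the harness steps recorded above, it is not a drop-in replacement for nvmf/common.sh:

  # flush any stale addresses, as the harness does
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # target side lives in its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps cvl_0_1 in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # bring everything up
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator side
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity pings in both directions, as in the trace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1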
00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.625 Malloc0 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.625 [2024-07-26 22:52:34.065949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.625 { 00:22:41.625 "params": { 00:22:41.625 "name": "Nvme$subsystem", 00:22:41.625 "trtype": "$TEST_TRANSPORT", 00:22:41.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.625 "adrfam": "ipv4", 00:22:41.625 "trsvcid": "$NVMF_PORT", 00:22:41.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.625 "hdgst": ${hdgst:-false}, 00:22:41.625 "ddgst": ${ddgst:-false} 00:22:41.625 }, 00:22:41.625 "method": "bdev_nvme_attach_controller" 00:22:41.625 } 00:22:41.625 EOF 00:22:41.625 )") 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
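The rpc_cmd calls traced above (bdevio.sh lines 18-22) provision the target over its JSON-RPC socket: transport, backing bdev, subsystem, namespace, listener. rpc_cmd is the autotest wrapper around scripts/rpc.py; assuming the default /var/tmp/spdk.sock, the equivalent manual sequence is roughly (flags copied from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After this, the bdevio app below connects as host nqn.2016-06.io.spdk:host1 using the JSON config that gen_nvmf_target_json prints next in the log.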
00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:41.625 22:52:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:41.625 "params": { 00:22:41.625 "name": "Nvme1", 00:22:41.625 "trtype": "tcp", 00:22:41.625 "traddr": "10.0.0.2", 00:22:41.625 "adrfam": "ipv4", 00:22:41.625 "trsvcid": "4420", 00:22:41.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.625 "hdgst": false, 00:22:41.625 "ddgst": false 00:22:41.625 }, 00:22:41.625 "method": "bdev_nvme_attach_controller" 00:22:41.625 }' 00:22:41.625 [2024-07-26 22:52:34.113028] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:41.625 [2024-07-26 22:52:34.113146] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3575562 ] 00:22:41.883 [2024-07-26 22:52:34.173580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:41.883 [2024-07-26 22:52:34.260480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.883 [2024-07-26 22:52:34.260531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.883 [2024-07-26 22:52:34.260535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.141 I/O targets: 00:22:42.141 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:42.141 00:22:42.141 00:22:42.141 CUnit - A unit testing framework for C - Version 2.1-3 00:22:42.141 http://cunit.sourceforge.net/ 00:22:42.141 00:22:42.141 00:22:42.141 Suite: bdevio tests on: Nvme1n1 00:22:42.141 Test: blockdev write read block ...passed 00:22:42.141 Test: blockdev write zeroes read block ...passed 00:22:42.141 Test: blockdev write zeroes read no split ...passed 00:22:42.141 Test: blockdev write zeroes read split ...passed 00:22:42.399 Test: blockdev write zeroes read split partial ...passed 00:22:42.399 Test: blockdev reset ...[2024-07-26 22:52:34.665569] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.399 [2024-07-26 22:52:34.665676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162c2a0 (9): Bad file descriptor 00:22:42.399 [2024-07-26 22:52:34.680322] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:42.399 passed 00:22:42.399 Test: blockdev write read 8 blocks ...passed 00:22:42.399 Test: blockdev write read size > 128k ...passed 00:22:42.399 Test: blockdev write read invalid size ...passed 00:22:42.399 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:42.399 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:42.399 Test: blockdev write read max offset ...passed 00:22:42.399 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:42.399 Test: blockdev writev readv 8 blocks ...passed 00:22:42.399 Test: blockdev writev readv 30 x 1block ...passed 00:22:42.399 Test: blockdev writev readv block ...passed 00:22:42.399 Test: blockdev writev readv size > 128k ...passed 00:22:42.399 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:42.399 Test: blockdev comparev and writev ...[2024-07-26 22:52:34.897363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.399 [2024-07-26 22:52:34.897403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.399 [2024-07-26 22:52:34.897429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.399 [2024-07-26 22:52:34.897447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.399 [2024-07-26 22:52:34.897839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.399 [2024-07-26 22:52:34.897875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:42.400 [2024-07-26 22:52:34.897904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.400 [2024-07-26 22:52:34.897922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:42.400 [2024-07-26 22:52:34.898315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.400 [2024-07-26 22:52:34.898352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:42.400 [2024-07-26 22:52:34.898392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.400 [2024-07-26 22:52:34.898420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:42.400 [2024-07-26 22:52:34.898840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.400 [2024-07-26 22:52:34.898900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:42.400 [2024-07-26 22:52:34.898925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.400 [2024-07-26 22:52:34.898943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:42.658 passed 00:22:42.658 Test: blockdev nvme passthru rw ...passed 00:22:42.658 Test: blockdev nvme passthru vendor specific ...[2024-07-26 22:52:34.983405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.658 [2024-07-26 22:52:34.983437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:42.658 [2024-07-26 22:52:34.983643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.658 [2024-07-26 22:52:34.983668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:42.658 [2024-07-26 22:52:34.983861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.658 [2024-07-26 22:52:34.983885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:42.658 [2024-07-26 22:52:34.984087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.658 [2024-07-26 22:52:34.984111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:42.658 passed 00:22:42.658 Test: blockdev nvme admin passthru ...passed 00:22:42.658 Test: blockdev copy ...passed 00:22:42.658 00:22:42.658 Run Summary: Type Total Ran Passed Failed Inactive 00:22:42.658 suites 1 1 n/a 0 0 00:22:42.658 tests 23 23 23 0 0 00:22:42.658 asserts 152 152 152 0 n/a 00:22:42.658 00:22:42.658 Elapsed time = 1.173 seconds 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.917 rmmod nvme_tcp 00:22:42.917 rmmod nvme_fabrics 00:22:42.917 rmmod nvme_keyring 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3575537 ']' 00:22:42.917 22:52:35 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3575537 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3575537 ']' 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3575537 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:42.917 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3575537 00:22:43.175 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:43.175 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:43.175 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3575537' 00:22:43.175 killing process with pid 3575537 00:22:43.175 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3575537 00:22:43.175 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3575537 00:22:43.433 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.433 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.433 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.433 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.433 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.433 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.433 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.433 22:52:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.335 22:52:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.335 00:22:45.335 real 0m6.352s 00:22:45.335 user 0m10.011s 00:22:45.335 sys 0m2.461s 00:22:45.335 22:52:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:45.335 22:52:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.335 ************************************ 00:22:45.335 END TEST nvmf_bdevio_no_huge 00:22:45.335 ************************************ 00:22:45.593 22:52:37 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:45.593 22:52:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:45.593 22:52:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:45.593 22:52:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.593 ************************************ 00:22:45.593 START TEST nvmf_tls 00:22:45.593 ************************************ 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:45.593 * Looking for test storage... 
00:22:45.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.593 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.594 22:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.494 
22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:47.494 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:47.494 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:47.494 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:47.494 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.494 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:22:47.494 00:22:47.495 --- 10.0.0.2 ping statistics --- 00:22:47.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.495 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:22:47.495 00:22:47.495 --- 10.0.0.1 ping statistics --- 00:22:47.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.495 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.495 22:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3577700 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3577700 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3577700 ']' 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.753 [2024-07-26 22:52:40.056861] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:47.753 [2024-07-26 22:52:40.056941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.753 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.753 [2024-07-26 22:52:40.128773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.753 [2024-07-26 22:52:40.217469] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.753 [2024-07-26 22:52:40.217532] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:47.753 [2024-07-26 22:52:40.217559] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:47.753 [2024-07-26 22:52:40.217572] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:47.753 [2024-07-26 22:52:40.217584] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:47.753 [2024-07-26 22:52:40.217623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:47.753 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:22:48.011 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:22:48.011 22:52:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:48.011 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:48.011 22:52:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:48.011 22:52:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:48.011 22:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']'
00:22:48.011 22:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:22:48.011 true
00:22:48.268 22:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:48.268 22:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version
00:22:48.268 22:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0
00:22:48.268 22:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]]
00:22:48.268 22:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:22:48.526 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:48.526 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version
00:22:48.784 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13
00:22:48.784 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]]
00:22:48.784 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:22:49.347 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:49.347 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version
00:22:49.347 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7
00:22:49.347 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]]
00:22:49.347 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:49.347 22:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls
00:22:49.604 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false
00:22:49.604 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]]
00:22:49.604 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:22:49.861 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:49.861 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls
00:22:50.118 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true
00:22:50.118 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]]
00:22:50.118 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:22:50.375 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls
00:22:50.375 22:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]]
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:22:50.634 22:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.NPajcyZyKI
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.RmzaYDLqus
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.NPajcyZyKI
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RmzaYDLqus
00:22:50.895 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:22:51.153 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:22:51.428 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.NPajcyZyKI
00:22:51.428 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NPajcyZyKI
00:22:51.428 22:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:51.716 [2024-07-26 22:52:44.009107] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:51.716 22:52:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:51.973 22:52:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:22:52.230 [2024-07-26 22:52:44.554603] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:52.230 [2024-07-26 22:52:44.554862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:52.230 22:52:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:52.488 malloc0
00:22:52.488 22:52:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:52.745 22:52:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NPajcyZyKI
00:22:53.003 [2024-07-26 22:52:45.369410] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:22:53.003 22:52:45 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NPajcyZyKI
00:22:53.003 EAL: No free 2048 kB hugepages reported on node 1
00:23:05.203 Initializing NVMe Controllers
00:23:05.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:05.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:05.203 Initialization complete. Launching workers.
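Note on the two interchange keys generated above: format_interchange_psk wraps nvmf/common.sh's format_key, which emits the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64>:. A minimal sketch of what the "python -" step appears to compute, under the assumption (consistent with the 48-character base64 payloads in the trace) that the payload is the key string followed by a little-endian CRC32 trailer; this is an illustrative reconstruction, not the verbatim helper:

  # sketch: format_key_sketch <prefix> <key-string> <hash-indicator>
  format_key_sketch() {
      local prefix=$1 key=$2 digest=$3
      # assumption: a 4-byte little-endian CRC32 of the key string is appended before base64-encoding
      python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("{}:{:02x}:{}:".format(sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()))' "$prefix" "$key" "$digest"
  }
  format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 1

Base64-decoding the NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value above does yield the 32-character key string plus a 4-byte trailer, which matches this sketch's layout.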
00:23:05.204 ========================================================
00:23:05.204 Latency(us)
00:23:05.204 Device Information : IOPS MiB/s Average min max
00:23:05.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7715.26 30.14 8298.00 1117.67 9634.63
00:23:05.204 ========================================================
00:23:05.204 Total : 7715.26 30.14 8298.00 1117.67 9634.63
00:23:05.204
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPajcyZyKI
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NPajcyZyKI'
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3579514
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3579514 /var/tmp/bdevperf.sock
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3579514 ']'
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:05.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:05.204 [2024-07-26 22:52:55.536180] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
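For reference, the run_bdevperf setup traced above appears to boil down to the following pattern; the backgrounding and pid capture are assumptions inferred from the -z (wait-for-RPC) flag and the subsequent waitforlisten call, since the log does not show them verbatim:

  # sketch of the bdevperf-in-background pattern used by target/tls.sh
  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  "$bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # block until the bdevperf RPC socket is up before issuing any RPCs against it
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock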
00:23:05.204 [2024-07-26 22:52:55.536256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579514 ]
00:23:05.204 EAL: No free 2048 kB hugepages reported on node 1
00:23:05.204 [2024-07-26 22:52:55.596299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:05.204 [2024-07-26 22:52:55.682022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:23:05.204 22:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NPajcyZyKI
00:23:05.204 [2024-07-26 22:52:56.068207] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:05.204 [2024-07-26 22:52:56.068340] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:23:05.204 TLSTESTn1
00:23:05.204 22:52:56 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:23:05.204 Running I/O for 10 seconds...
00:23:15.173
00:23:15.173 Latency(us)
00:23:15.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.173 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:15.173 Verification LBA range: start 0x0 length 0x2000
00:23:15.173 TLSTESTn1 : 10.07 1281.10 5.00 0.00 0.00 99616.11 6213.78 86992.97
00:23:15.173 ===================================================================================================================
00:23:15.173 Total : 1281.10 5.00 0.00 0.00 99616.11 6213.78 86992.97
00:23:15.173 0
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3579514
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3579514 ']'
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3579514
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3579514
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3579514'
00:23:15.173 killing process with pid 3579514
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3579514
00:23:15.173 Received shutdown signal, test time was about 10.000000 seconds
00:23:15.173
00:23:15.173 Latency(us)
00:23:15.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
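In short, each positive run drives the TLS data path with two RPCs against the bdevperf socket, condensed here from the trace above (all paths and arguments as logged):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # attach a TLS-enabled NVMe/TCP controller through the bdevperf RPC socket
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NPajcyZyKI
  # then kick off the preconfigured verify workload (-q 128 -o 4096 -w verify -t 10)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests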
00:23:15.173 ===================================================================================================================
00:23:15.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:15.173 [2024-07-26 22:53:06.420416] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3579514
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RmzaYDLqus
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RmzaYDLqus
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RmzaYDLqus
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RmzaYDLqus'
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3580824
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3580824 /var/tmp/bdevperf.sock
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3580824 ']'
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:15.173 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:15.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:15.174 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:15.174 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:15.174 [2024-07-26 22:53:06.695078] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:23:15.174 [2024-07-26 22:53:06.695158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3580824 ]
00:23:15.174 EAL: No free 2048 kB hugepages reported on node 1
00:23:15.174 [2024-07-26 22:53:06.755445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:15.174 [2024-07-26 22:53:06.840591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:15.174 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:15.174 22:53:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:23:15.174 22:53:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RmzaYDLqus
00:23:15.174 [2024-07-26 22:53:07.174697] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:15.174 [2024-07-26 22:53:07.174870] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:23:15.174 [2024-07-26 22:53:07.183484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:23:15.174 [2024-07-26 22:53:07.183841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb73840 (107): Transport endpoint is not connected
00:23:15.174 [2024-07-26 22:53:07.184830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb73840 (9): Bad file descriptor
00:23:15.174 [2024-07-26 22:53:07.185829] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:15.174 [2024-07-26 22:53:07.185848] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:23:15.174 [2024-07-26 22:53:07.185880] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:15.174 request:
00:23:15.174 {
00:23:15.174 "name": "TLSTEST",
00:23:15.174 "trtype": "tcp",
00:23:15.174 "traddr": "10.0.0.2",
00:23:15.174 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:15.174 "adrfam": "ipv4",
00:23:15.174 "trsvcid": "4420",
00:23:15.174 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:15.174 "psk": "/tmp/tmp.RmzaYDLqus",
00:23:15.174 "method": "bdev_nvme_attach_controller",
00:23:15.174 "req_id": 1
00:23:15.174 }
00:23:15.174 Got JSON-RPC error response
00:23:15.174 response:
00:23:15.174 {
00:23:15.174 "code": -5,
00:23:15.174 "message": "Input/output error"
00:23:15.174 }
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3580824
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3580824 ']'
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3580824
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3580824
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3580824'
00:23:15.174 killing process with pid 3580824
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3580824
00:23:15.174 Received shutdown signal, test time was about 10.000000 seconds
00:23:15.174
00:23:15.174 Latency(us)
00:23:15.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.174 ===================================================================================================================
00:23:15.174 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:15.174 [2024-07-26 22:53:07.236119] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3580824
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NPajcyZyKI
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NPajcyZyKI
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NPajcyZyKI
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NPajcyZyKI'
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3580858
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3580858 /var/tmp/bdevperf.sock
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3580858 ']'
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:15.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:15.174 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:15.174 [2024-07-26 22:53:07.497821] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:23:15.174 [2024-07-26 22:53:07.497900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3580858 ]
00:23:15.174 EAL: No free 2048 kB hugepages reported on node 1
00:23:15.174 [2024-07-26 22:53:07.558138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:15.174 [2024-07-26 22:53:07.641914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:15.433 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:15.433 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:23:15.433 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.NPajcyZyKI
00:23:15.691 [2024-07-26 22:53:07.966129] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:15.692 [2024-07-26 22:53:07.966262] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:23:15.692 [2024-07-26 22:53:07.975448] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:23:15.692 [2024-07-26 22:53:07.975480] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:23:15.692 [2024-07-26 22:53:07.975534] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:23:15.692 [2024-07-26 22:53:07.976075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a840 (107): Transport endpoint is not connected
00:23:15.692 [2024-07-26 22:53:07.977064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a840 (9): Bad file descriptor
00:23:15.692 [2024-07-26 22:53:07.978057] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:15.692 [2024-07-26 22:53:07.978083] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:23:15.692 [2024-07-26 22:53:07.978101] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
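The "Could not find PSK for identity" errors above show what the target actually looks up during the TLS handshake: an identity string built from the client's host NQN and the subsystem NQN, so a key registered for host1/cnode1 cannot satisfy host2. Illustratively, reconstructing only the string reported in the log (the NVMe0R01 prefix and field order are taken directly from the errors above; nothing further is implied about the lookup internals):

  # illustrative: the PSK identity the target failed to match for this attach
  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"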
00:23:15.692 request:
00:23:15.692 {
00:23:15.692 "name": "TLSTEST",
00:23:15.692 "trtype": "tcp",
00:23:15.692 "traddr": "10.0.0.2",
00:23:15.692 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:23:15.692 "adrfam": "ipv4",
00:23:15.692 "trsvcid": "4420",
00:23:15.692 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:15.692 "psk": "/tmp/tmp.NPajcyZyKI",
00:23:15.692 "method": "bdev_nvme_attach_controller",
00:23:15.692 "req_id": 1
00:23:15.692 }
00:23:15.692 Got JSON-RPC error response
00:23:15.692 response:
00:23:15.692 {
00:23:15.692 "code": -5,
00:23:15.692 "message": "Input/output error"
00:23:15.692 }
00:23:15.692 22:53:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3580858
00:23:15.692 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3580858 ']'
00:23:15.692 22:53:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3580858
00:23:15.692 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:15.692 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:15.692 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3580858
00:23:15.692 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:23:15.692 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:23:15.692 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3580858'
00:23:15.692 killing process with pid 3580858
00:23:15.692 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3580858
00:23:15.692 Received shutdown signal, test time was about 10.000000 seconds
00:23:15.692
00:23:15.692 Latency(us)
00:23:15.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.692 ===================================================================================================================
00:23:15.692 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:15.692 [2024-07-26 22:53:08.030247] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:23:15.692 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3580858
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPajcyZyKI
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPajcyZyKI
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPajcyZyKI
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NPajcyZyKI'
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3580985
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3580985 /var/tmp/bdevperf.sock
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3580985 ']'
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:15.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:15.951 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:15.951 [2024-07-26 22:53:08.268868] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:23:15.951 [2024-07-26 22:53:08.268942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3580985 ]
00:23:15.951 EAL: No free 2048 kB hugepages reported on node 1
00:23:15.951 [2024-07-26 22:53:08.326477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:15.951 [2024-07-26 22:53:08.410142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:16.209 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:16.209 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:23:16.209 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NPajcyZyKI
00:23:16.468 [2024-07-26 22:53:08.745685] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:16.468 [2024-07-26 22:53:08.745802] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:23:16.468 [2024-07-26 22:53:08.757445] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:23:16.468 [2024-07-26 22:53:08.757475] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:23:16.468 [2024-07-26 22:53:08.757526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:23:16.468 [2024-07-26 22:53:08.757727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a840 (107): Transport endpoint is not connected
00:23:16.468 [2024-07-26 22:53:08.758713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a840 (9): Bad file descriptor
00:23:16.468 [2024-07-26 22:53:08.759713] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:16.468 [2024-07-26 22:53:08.759732] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:23:16.468 [2024-07-26 22:53:08.759763] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:16.468 request:
00:23:16.468 {
00:23:16.468 "name": "TLSTEST",
00:23:16.468 "trtype": "tcp",
00:23:16.468 "traddr": "10.0.0.2",
00:23:16.468 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:16.468 "adrfam": "ipv4",
00:23:16.468 "trsvcid": "4420",
00:23:16.468 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:23:16.468 "psk": "/tmp/tmp.NPajcyZyKI",
00:23:16.468 "method": "bdev_nvme_attach_controller",
00:23:16.468 "req_id": 1
00:23:16.468 }
00:23:16.468 Got JSON-RPC error response
00:23:16.468 response:
00:23:16.468 {
00:23:16.468 "code": -5,
00:23:16.468 "message": "Input/output error"
00:23:16.468 }
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3580985
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3580985 ']'
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3580985
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3580985
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3580985'
00:23:16.468 killing process with pid 3580985
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3580985
00:23:16.468 Received shutdown signal, test time was about 10.000000 seconds
00:23:16.468
00:23:16.468 Latency(us)
00:23:16.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:16.468 ===================================================================================================================
00:23:16.468 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:16.468 [2024-07-26 22:53:08.801434] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:23:16.468 22:53:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3580985
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk=
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3581118
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3581118 /var/tmp/bdevperf.sock
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3581118 ']'
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:16.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:16.727 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:16.727 [2024-07-26 22:53:09.062904] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:23:16.727 [2024-07-26 22:53:09.062978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581118 ]
00:23:16.727 EAL: No free 2048 kB hugepages reported on node 1
00:23:16.727 [2024-07-26 22:53:09.119574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:16.727 [2024-07-26 22:53:09.199782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:16.985 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:16.985 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:23:16.985 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:23:17.244 [2024-07-26 22:53:09.529618] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:23:17.244 [2024-07-26 22:53:09.531370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cff10 (9): Bad file descriptor
00:23:17.244 [2024-07-26 22:53:09.532366] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:17.244 [2024-07-26 22:53:09.532394] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:23:17.244 [2024-07-26 22:53:09.532426] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:17.244 request:
00:23:17.244 {
00:23:17.244 "name": "TLSTEST",
00:23:17.244 "trtype": "tcp",
00:23:17.244 "traddr": "10.0.0.2",
00:23:17.244 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:17.244 "adrfam": "ipv4",
00:23:17.244 "trsvcid": "4420",
00:23:17.244 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:17.244 "method": "bdev_nvme_attach_controller",
00:23:17.244 "req_id": 1
00:23:17.244 }
00:23:17.244 Got JSON-RPC error response
00:23:17.244 response:
00:23:17.244 {
00:23:17.244 "code": -5,
00:23:17.244 "message": "Input/output error"
00:23:17.244 }
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3581118
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3581118 ']'
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3581118
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3581118
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3581118'
00:23:17.244 killing process with pid 3581118
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3581118
00:23:17.244 Received shutdown signal, test time was about 10.000000 seconds
00:23:17.244
00:23:17.244 Latency(us)
00:23:17.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:17.244 ===================================================================================================================
00:23:17.244 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:17.244 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3581118
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3577700
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3577700 ']'
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3577700
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3577700
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3577700'
00:23:17.503 killing process with pid 3577700
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3577700
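Each case tears its process down with killprocess, and the traces above show the shape of that helper: a liveness check, a sanity check on the process name, then the kill. A simplified sketch of that sequence (not the verbatim autotest_common.sh implementation; the sudo branch is an assumption suggested by the '[' reactor_2 = sudo ']' test in the log):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                         # still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      echo "killing process with pid $pid"
      if [ "$process_name" = sudo ]; then
          sudo kill "$pid"   # assumption: sudo-owned wrappers need an elevated kill
      else
          kill "$pid"
      fi
      wait "$pid" || true    # reap the child, mirroring the '@970 wait' entries above
  }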
00:23:17.503 [2024-07-26 22:53:09.821469] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:23:17.503 22:53:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3577700
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.CMbDqMk05Q
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.CMbDqMk05Q
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3581267
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3581267
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3581267 ']'
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:17.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:17.762 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:17.762 [2024-07-26 22:53:10.185706] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
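The longer key introduced here reuses the same interchange encoding, just with a 48-character key string and hash indicator 2, which shows up as the :02: field in NVMeTLSkey-1:02:...:. Using the illustrative sketch from earlier (same assumptions apply):

  format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2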
00:23:17.762 [2024-07-26 22:53:10.185783] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:17.762 EAL: No free 2048 kB hugepages reported on node 1
00:23:17.762 [2024-07-26 22:53:10.247486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:18.021 [2024-07-26 22:53:10.331027] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:18.021 [2024-07-26 22:53:10.331089] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:18.021 [2024-07-26 22:53:10.331119] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:18.021 [2024-07-26 22:53:10.331132] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:18.021 [2024-07-26 22:53:10.331142] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:18.021 [2024-07-26 22:53:10.331169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:18.021 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:18.021 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:23:18.021 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:18.021 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:18.021 22:53:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:18.021 22:53:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:18.021 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.CMbDqMk05Q
00:23:18.021 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CMbDqMk05Q
00:23:18.021 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:18.279 [2024-07-26 22:53:10.679212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:18.279 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:18.536 22:53:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:18.793 [2024-07-26 22:53:11.176578] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:18.793 [2024-07-26 22:53:11.176861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:18.793 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:19.050 malloc0
00:23:19.050 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:19.309 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CMbDqMk05Q
00:23:19.567 [2024-07-26 22:53:11.934233] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:23:19.567 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CMbDqMk05Q
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CMbDqMk05Q'
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3581436
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3581436 /var/tmp/bdevperf.sock
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3581436 ']'
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:19.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:19.568 22:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:19.568 [2024-07-26 22:53:11.996017] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:23:19.568 [2024-07-26 22:53:11.996114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581436 ]
00:23:19.568 EAL: No free 2048 kB hugepages reported on node 1
00:23:19.568 [2024-07-26 22:53:12.058088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:19.568 [2024-07-26 22:53:12.146219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:19.825 22:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:19.825 22:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:23:19.825 22:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CMbDqMk05Q
00:23:20.083 [2024-07-26 22:53:12.478912] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:20.083 [2024-07-26 22:53:12.479041] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:23:20.083 TLSTESTn1
00:23:20.083 22:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:23:20.341 Running I/O for 10 seconds...
00:23:30.310
00:23:30.310 Latency(us)
00:23:30.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.310 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:30.310 Verification LBA range: start 0x0 length 0x2000
00:23:30.310 TLSTESTn1 : 10.06 2180.15 8.52 0.00 0.00 58545.17 7961.41 104080.88
00:23:30.310 ===================================================================================================================
00:23:30.310 Total : 2180.15 8.52 0.00 0.00 58545.17 7961.41 104080.88
00:23:30.310 0
00:23:30.310 22:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:30.310 22:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3581436
00:23:30.310 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3581436 ']'
00:23:30.310 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3581436
00:23:30.310 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:30.310 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:30.310 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3581436
00:23:30.569 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:23:30.569 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:23:30.569 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3581436'
00:23:30.569 killing process with pid 3581436
00:23:30.569 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3581436
00:23:30.569 Received shutdown signal, test time was about 10.000000 seconds
00:23:30.569
00:23:30.569 Latency(us)
00:23:30.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.569 ===================================================================================================================
00:23:30.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:30.569 [2024-07-26 22:53:22.814443] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:23:30.569 22:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3581436
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.CMbDqMk05Q
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CMbDqMk05Q
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CMbDqMk05Q
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CMbDqMk05Q
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CMbDqMk05Q'
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3582745
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3582745 /var/tmp/bdevperf.sock
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3582745 ']'
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:30.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:30.569 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:30.828 [2024-07-26 22:53:23.088420] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:23:30.828 [2024-07-26 22:53:23.088494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582745 ] 00:23:30.828 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.828 [2024-07-26 22:53:23.146314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.828 [2024-07-26 22:53:23.232562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.086 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:31.086 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:31.086 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CMbDqMk05Q 00:23:31.086 [2024-07-26 22:53:23.579241] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.086 [2024-07-26 22:53:23.579321] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:31.086 [2024-07-26 22:53:23.579337] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.CMbDqMk05Q 00:23:31.086 request: 00:23:31.086 { 00:23:31.086 "name": "TLSTEST", 00:23:31.086 "trtype": "tcp", 00:23:31.086 "traddr": "10.0.0.2", 00:23:31.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.086 "adrfam": "ipv4", 00:23:31.086 "trsvcid": "4420", 00:23:31.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.086 "psk": "/tmp/tmp.CMbDqMk05Q", 00:23:31.086 "method": "bdev_nvme_attach_controller", 00:23:31.086 "req_id": 1 00:23:31.086 } 00:23:31.086 Got JSON-RPC error response 00:23:31.086 response: 00:23:31.086 { 00:23:31.086 "code": -1, 00:23:31.086 "message": "Operation not permitted" 00:23:31.086 } 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3582745 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3582745 ']' 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3582745 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3582745 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3582745' 00:23:31.344 killing process with pid 3582745 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3582745 00:23:31.344 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.344 00:23:31.344 Latency(us) 00:23:31.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.344 =================================================================================================================== 00:23:31.344 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3582745 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.344 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3581267 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3581267 ']' 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3581267 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3581267 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3581267' 00:23:31.601 killing process with pid 3581267 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3581267 00:23:31.601 [2024-07-26 22:53:23.875713] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:31.601 22:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3581267 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3582888 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3582888 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3582888 ']' 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:31.858 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.858 [2024-07-26 22:53:24.184906] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
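With the old target killed, the harness brings up a fresh nvmf_tgt inside the test's network namespace and blocks until its RPC socket answers. The real waitforlisten in autotest_common.sh is more elaborate (retry budget, pid checks), but the core of it is roughly this sketch, using rpc_get_methods as a cheap liveness probe:

    # sketch of the wait-for-listen pattern seen in this trace; the real helper adds retries and pid checks
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app starts servicing requests
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done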
00:23:31.858 [2024-07-26 22:53:24.184985] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.858 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.858 [2024-07-26 22:53:24.252129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.858 [2024-07-26 22:53:24.339219] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.858 [2024-07-26 22:53:24.339283] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.858 [2024-07-26 22:53:24.339300] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.858 [2024-07-26 22:53:24.339314] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.858 [2024-07-26 22:53:24.339326] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.858 [2024-07-26 22:53:24.339358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.CMbDqMk05Q 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.CMbDqMk05Q 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.CMbDqMk05Q 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CMbDqMk05Q 00:23:32.115 22:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:32.373 [2024-07-26 22:53:24.727452] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.373 22:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:32.631 22:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:32.889 [2024-07-26 22:53:25.256917] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
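For reference, the setup_nvmf_tgt helper exercised here boils down to the RPC sequence below, every command taken verbatim from this trace. With the PSK file still at mode 0666, everything up to the namespace attach succeeds and only the final nvmf_subsystem_add_host is expected to fail, which is exactly what the following lines show (JSON-RPC error -32603):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CMbDqMk05Q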
00:23:32.889 [2024-07-26 22:53:25.257200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.889 22:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:33.147 malloc0 00:23:33.147 22:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.405 22:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CMbDqMk05Q 00:23:33.663 [2024-07-26 22:53:26.017314] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:33.663 [2024-07-26 22:53:26.017352] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:33.663 [2024-07-26 22:53:26.017388] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:33.663 request: 00:23:33.663 { 00:23:33.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.663 "host": "nqn.2016-06.io.spdk:host1", 00:23:33.663 "psk": "/tmp/tmp.CMbDqMk05Q", 00:23:33.663 "method": "nvmf_subsystem_add_host", 00:23:33.663 "req_id": 1 00:23:33.663 } 00:23:33.663 Got JSON-RPC error response 00:23:33.663 response: 00:23:33.663 { 00:23:33.663 "code": -32603, 00:23:33.663 "message": "Internal error" 00:23:33.663 } 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3582888 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3582888 ']' 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3582888 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3582888 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3582888' 00:23:33.663 killing process with pid 3582888 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3582888 00:23:33.663 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3582888 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.CMbDqMk05Q 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=3583179 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3583179 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3583179 ']' 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.921 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.921 [2024-07-26 22:53:26.381810] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:33.921 [2024-07-26 22:53:26.381891] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.921 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.179 [2024-07-26 22:53:26.455885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.179 [2024-07-26 22:53:26.551356] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.179 [2024-07-26 22:53:26.551416] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.179 [2024-07-26 22:53:26.551446] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.179 [2024-07-26 22:53:26.551458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.179 [2024-07-26 22:53:26.551468] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
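The app_setup_trace notices above spell out how to snapshot the tracepoints enabled by -e 0xFFFF. Following the log's own hint, either of these should work while the target is running (the output filename is an assumption for illustration):

    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # parse a live snapshot; filename is illustrative
    # or just keep the raw shared-memory ring for offline analysis:
    cp /dev/shm/nvmf_trace.0 .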
00:23:34.179 [2024-07-26 22:53:26.551494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.179 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.179 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:34.179 22:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.179 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.179 22:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.437 22:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.437 22:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.CMbDqMk05Q 00:23:34.437 22:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CMbDqMk05Q 00:23:34.437 22:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.437 [2024-07-26 22:53:26.915878] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.437 22:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:34.695 22:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:34.953 [2024-07-26 22:53:27.429328] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.953 [2024-07-26 22:53:27.429619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.953 22:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:35.211 malloc0 00:23:35.211 22:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.469 22:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CMbDqMk05Q 00:23:35.727 [2024-07-26 22:53:28.179002] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3583463 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3583463 /var/tmp/bdevperf.sock 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3583463 ']' 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.727 22:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.985 [2024-07-26 22:53:28.235920] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:35.985 [2024-07-26 22:53:28.236006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583463 ] 00:23:35.985 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.985 [2024-07-26 22:53:28.293151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.985 [2024-07-26 22:53:28.380331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.985 22:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.985 22:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.985 22:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CMbDqMk05Q 00:23:36.551 [2024-07-26 22:53:28.756214] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.551 [2024-07-26 22:53:28.756344] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:36.551 TLSTESTn1 00:23:36.551 22:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:36.809 22:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:36.809 "subsystems": [ 00:23:36.809 { 00:23:36.809 "subsystem": "keyring", 00:23:36.809 "config": [] 00:23:36.809 }, 00:23:36.809 { 00:23:36.809 "subsystem": "iobuf", 00:23:36.809 "config": [ 00:23:36.809 { 00:23:36.809 "method": "iobuf_set_options", 00:23:36.809 "params": { 00:23:36.809 "small_pool_count": 8192, 00:23:36.809 "large_pool_count": 1024, 00:23:36.809 "small_bufsize": 8192, 00:23:36.809 "large_bufsize": 135168 00:23:36.809 } 00:23:36.809 } 00:23:36.809 ] 00:23:36.809 }, 00:23:36.809 { 00:23:36.809 "subsystem": "sock", 00:23:36.809 "config": [ 00:23:36.809 { 00:23:36.809 "method": "sock_set_default_impl", 00:23:36.809 "params": { 00:23:36.809 "impl_name": "posix" 00:23:36.809 } 00:23:36.809 }, 00:23:36.809 { 00:23:36.809 "method": "sock_impl_set_options", 00:23:36.809 "params": { 00:23:36.809 "impl_name": "ssl", 00:23:36.809 "recv_buf_size": 4096, 00:23:36.809 "send_buf_size": 4096, 00:23:36.809 "enable_recv_pipe": true, 00:23:36.809 "enable_quickack": false, 00:23:36.809 "enable_placement_id": 0, 00:23:36.809 "enable_zerocopy_send_server": true, 00:23:36.809 "enable_zerocopy_send_client": false, 00:23:36.809 "zerocopy_threshold": 0, 00:23:36.809 "tls_version": 0, 00:23:36.809 "enable_ktls": false 00:23:36.809 } 00:23:36.809 }, 00:23:36.809 { 00:23:36.809 "method": "sock_impl_set_options", 00:23:36.809 "params": { 00:23:36.809 "impl_name": "posix", 00:23:36.809 "recv_buf_size": 2097152, 00:23:36.809 "send_buf_size": 
2097152, 00:23:36.809 "enable_recv_pipe": true, 00:23:36.809 "enable_quickack": false, 00:23:36.809 "enable_placement_id": 0, 00:23:36.809 "enable_zerocopy_send_server": true, 00:23:36.809 "enable_zerocopy_send_client": false, 00:23:36.809 "zerocopy_threshold": 0, 00:23:36.809 "tls_version": 0, 00:23:36.809 "enable_ktls": false 00:23:36.809 } 00:23:36.809 } 00:23:36.809 ] 00:23:36.809 }, 00:23:36.809 { 00:23:36.809 "subsystem": "vmd", 00:23:36.809 "config": [] 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "subsystem": "accel", 00:23:36.810 "config": [ 00:23:36.810 { 00:23:36.810 "method": "accel_set_options", 00:23:36.810 "params": { 00:23:36.810 "small_cache_size": 128, 00:23:36.810 "large_cache_size": 16, 00:23:36.810 "task_count": 2048, 00:23:36.810 "sequence_count": 2048, 00:23:36.810 "buf_count": 2048 00:23:36.810 } 00:23:36.810 } 00:23:36.810 ] 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "subsystem": "bdev", 00:23:36.810 "config": [ 00:23:36.810 { 00:23:36.810 "method": "bdev_set_options", 00:23:36.810 "params": { 00:23:36.810 "bdev_io_pool_size": 65535, 00:23:36.810 "bdev_io_cache_size": 256, 00:23:36.810 "bdev_auto_examine": true, 00:23:36.810 "iobuf_small_cache_size": 128, 00:23:36.810 "iobuf_large_cache_size": 16 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "bdev_raid_set_options", 00:23:36.810 "params": { 00:23:36.810 "process_window_size_kb": 1024 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "bdev_iscsi_set_options", 00:23:36.810 "params": { 00:23:36.810 "timeout_sec": 30 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "bdev_nvme_set_options", 00:23:36.810 "params": { 00:23:36.810 "action_on_timeout": "none", 00:23:36.810 "timeout_us": 0, 00:23:36.810 "timeout_admin_us": 0, 00:23:36.810 "keep_alive_timeout_ms": 10000, 00:23:36.810 "arbitration_burst": 0, 00:23:36.810 "low_priority_weight": 0, 00:23:36.810 "medium_priority_weight": 0, 00:23:36.810 "high_priority_weight": 0, 00:23:36.810 "nvme_adminq_poll_period_us": 10000, 00:23:36.810 "nvme_ioq_poll_period_us": 0, 00:23:36.810 "io_queue_requests": 0, 00:23:36.810 "delay_cmd_submit": true, 00:23:36.810 "transport_retry_count": 4, 00:23:36.810 "bdev_retry_count": 3, 00:23:36.810 "transport_ack_timeout": 0, 00:23:36.810 "ctrlr_loss_timeout_sec": 0, 00:23:36.810 "reconnect_delay_sec": 0, 00:23:36.810 "fast_io_fail_timeout_sec": 0, 00:23:36.810 "disable_auto_failback": false, 00:23:36.810 "generate_uuids": false, 00:23:36.810 "transport_tos": 0, 00:23:36.810 "nvme_error_stat": false, 00:23:36.810 "rdma_srq_size": 0, 00:23:36.810 "io_path_stat": false, 00:23:36.810 "allow_accel_sequence": false, 00:23:36.810 "rdma_max_cq_size": 0, 00:23:36.810 "rdma_cm_event_timeout_ms": 0, 00:23:36.810 "dhchap_digests": [ 00:23:36.810 "sha256", 00:23:36.810 "sha384", 00:23:36.810 "sha512" 00:23:36.810 ], 00:23:36.810 "dhchap_dhgroups": [ 00:23:36.810 "null", 00:23:36.810 "ffdhe2048", 00:23:36.810 "ffdhe3072", 00:23:36.810 "ffdhe4096", 00:23:36.810 "ffdhe6144", 00:23:36.810 "ffdhe8192" 00:23:36.810 ] 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "bdev_nvme_set_hotplug", 00:23:36.810 "params": { 00:23:36.810 "period_us": 100000, 00:23:36.810 "enable": false 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "bdev_malloc_create", 00:23:36.810 "params": { 00:23:36.810 "name": "malloc0", 00:23:36.810 "num_blocks": 8192, 00:23:36.810 "block_size": 4096, 00:23:36.810 "physical_block_size": 4096, 00:23:36.810 "uuid": 
"1d4268ce-1838-442e-b2da-f11ae3c61c90", 00:23:36.810 "optimal_io_boundary": 0 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "bdev_wait_for_examine" 00:23:36.810 } 00:23:36.810 ] 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "subsystem": "nbd", 00:23:36.810 "config": [] 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "subsystem": "scheduler", 00:23:36.810 "config": [ 00:23:36.810 { 00:23:36.810 "method": "framework_set_scheduler", 00:23:36.810 "params": { 00:23:36.810 "name": "static" 00:23:36.810 } 00:23:36.810 } 00:23:36.810 ] 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "subsystem": "nvmf", 00:23:36.810 "config": [ 00:23:36.810 { 00:23:36.810 "method": "nvmf_set_config", 00:23:36.810 "params": { 00:23:36.810 "discovery_filter": "match_any", 00:23:36.810 "admin_cmd_passthru": { 00:23:36.810 "identify_ctrlr": false 00:23:36.810 } 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "nvmf_set_max_subsystems", 00:23:36.810 "params": { 00:23:36.810 "max_subsystems": 1024 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "nvmf_set_crdt", 00:23:36.810 "params": { 00:23:36.810 "crdt1": 0, 00:23:36.810 "crdt2": 0, 00:23:36.810 "crdt3": 0 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "nvmf_create_transport", 00:23:36.810 "params": { 00:23:36.810 "trtype": "TCP", 00:23:36.810 "max_queue_depth": 128, 00:23:36.810 "max_io_qpairs_per_ctrlr": 127, 00:23:36.810 "in_capsule_data_size": 4096, 00:23:36.810 "max_io_size": 131072, 00:23:36.810 "io_unit_size": 131072, 00:23:36.810 "max_aq_depth": 128, 00:23:36.810 "num_shared_buffers": 511, 00:23:36.810 "buf_cache_size": 4294967295, 00:23:36.810 "dif_insert_or_strip": false, 00:23:36.810 "zcopy": false, 00:23:36.810 "c2h_success": false, 00:23:36.810 "sock_priority": 0, 00:23:36.810 "abort_timeout_sec": 1, 00:23:36.810 "ack_timeout": 0, 00:23:36.810 "data_wr_pool_size": 0 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "nvmf_create_subsystem", 00:23:36.810 "params": { 00:23:36.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.810 "allow_any_host": false, 00:23:36.810 "serial_number": "SPDK00000000000001", 00:23:36.810 "model_number": "SPDK bdev Controller", 00:23:36.810 "max_namespaces": 10, 00:23:36.810 "min_cntlid": 1, 00:23:36.810 "max_cntlid": 65519, 00:23:36.810 "ana_reporting": false 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "nvmf_subsystem_add_host", 00:23:36.810 "params": { 00:23:36.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.810 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.810 "psk": "/tmp/tmp.CMbDqMk05Q" 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "nvmf_subsystem_add_ns", 00:23:36.810 "params": { 00:23:36.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.810 "namespace": { 00:23:36.810 "nsid": 1, 00:23:36.810 "bdev_name": "malloc0", 00:23:36.810 "nguid": "1D4268CE1838442EB2DAF11AE3C61C90", 00:23:36.810 "uuid": "1d4268ce-1838-442e-b2da-f11ae3c61c90", 00:23:36.810 "no_auto_visible": false 00:23:36.810 } 00:23:36.810 } 00:23:36.810 }, 00:23:36.810 { 00:23:36.810 "method": "nvmf_subsystem_add_listener", 00:23:36.810 "params": { 00:23:36.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.810 "listen_address": { 00:23:36.810 "trtype": "TCP", 00:23:36.810 "adrfam": "IPv4", 00:23:36.810 "traddr": "10.0.0.2", 00:23:36.810 "trsvcid": "4420" 00:23:36.810 }, 00:23:36.810 "secure_channel": true 00:23:36.810 } 00:23:36.810 } 00:23:36.810 ] 00:23:36.810 } 00:23:36.810 ] 00:23:36.810 }' 00:23:36.810 22:53:29 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:37.069 22:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:37.069 "subsystems": [ 00:23:37.069 { 00:23:37.069 "subsystem": "keyring", 00:23:37.069 "config": [] 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "subsystem": "iobuf", 00:23:37.069 "config": [ 00:23:37.069 { 00:23:37.069 "method": "iobuf_set_options", 00:23:37.069 "params": { 00:23:37.069 "small_pool_count": 8192, 00:23:37.069 "large_pool_count": 1024, 00:23:37.069 "small_bufsize": 8192, 00:23:37.069 "large_bufsize": 135168 00:23:37.069 } 00:23:37.069 } 00:23:37.069 ] 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "subsystem": "sock", 00:23:37.069 "config": [ 00:23:37.069 { 00:23:37.069 "method": "sock_set_default_impl", 00:23:37.069 "params": { 00:23:37.069 "impl_name": "posix" 00:23:37.069 } 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "method": "sock_impl_set_options", 00:23:37.069 "params": { 00:23:37.069 "impl_name": "ssl", 00:23:37.069 "recv_buf_size": 4096, 00:23:37.069 "send_buf_size": 4096, 00:23:37.069 "enable_recv_pipe": true, 00:23:37.069 "enable_quickack": false, 00:23:37.069 "enable_placement_id": 0, 00:23:37.069 "enable_zerocopy_send_server": true, 00:23:37.069 "enable_zerocopy_send_client": false, 00:23:37.069 "zerocopy_threshold": 0, 00:23:37.069 "tls_version": 0, 00:23:37.069 "enable_ktls": false 00:23:37.069 } 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "method": "sock_impl_set_options", 00:23:37.069 "params": { 00:23:37.069 "impl_name": "posix", 00:23:37.069 "recv_buf_size": 2097152, 00:23:37.069 "send_buf_size": 2097152, 00:23:37.069 "enable_recv_pipe": true, 00:23:37.069 "enable_quickack": false, 00:23:37.069 "enable_placement_id": 0, 00:23:37.069 "enable_zerocopy_send_server": true, 00:23:37.069 "enable_zerocopy_send_client": false, 00:23:37.069 "zerocopy_threshold": 0, 00:23:37.069 "tls_version": 0, 00:23:37.069 "enable_ktls": false 00:23:37.069 } 00:23:37.069 } 00:23:37.069 ] 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "subsystem": "vmd", 00:23:37.069 "config": [] 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "subsystem": "accel", 00:23:37.069 "config": [ 00:23:37.069 { 00:23:37.069 "method": "accel_set_options", 00:23:37.069 "params": { 00:23:37.069 "small_cache_size": 128, 00:23:37.069 "large_cache_size": 16, 00:23:37.069 "task_count": 2048, 00:23:37.069 "sequence_count": 2048, 00:23:37.069 "buf_count": 2048 00:23:37.069 } 00:23:37.069 } 00:23:37.069 ] 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "subsystem": "bdev", 00:23:37.069 "config": [ 00:23:37.069 { 00:23:37.069 "method": "bdev_set_options", 00:23:37.069 "params": { 00:23:37.069 "bdev_io_pool_size": 65535, 00:23:37.069 "bdev_io_cache_size": 256, 00:23:37.069 "bdev_auto_examine": true, 00:23:37.069 "iobuf_small_cache_size": 128, 00:23:37.069 "iobuf_large_cache_size": 16 00:23:37.069 } 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "method": "bdev_raid_set_options", 00:23:37.069 "params": { 00:23:37.069 "process_window_size_kb": 1024 00:23:37.069 } 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "method": "bdev_iscsi_set_options", 00:23:37.069 "params": { 00:23:37.069 "timeout_sec": 30 00:23:37.069 } 00:23:37.069 }, 00:23:37.069 { 00:23:37.069 "method": "bdev_nvme_set_options", 00:23:37.069 "params": { 00:23:37.069 "action_on_timeout": "none", 00:23:37.069 "timeout_us": 0, 00:23:37.069 "timeout_admin_us": 0, 00:23:37.069 "keep_alive_timeout_ms": 10000, 00:23:37.069 "arbitration_burst": 0, 
00:23:37.069 "low_priority_weight": 0, 00:23:37.069 "medium_priority_weight": 0, 00:23:37.069 "high_priority_weight": 0, 00:23:37.069 "nvme_adminq_poll_period_us": 10000, 00:23:37.069 "nvme_ioq_poll_period_us": 0, 00:23:37.069 "io_queue_requests": 512, 00:23:37.069 "delay_cmd_submit": true, 00:23:37.069 "transport_retry_count": 4, 00:23:37.069 "bdev_retry_count": 3, 00:23:37.069 "transport_ack_timeout": 0, 00:23:37.069 "ctrlr_loss_timeout_sec": 0, 00:23:37.070 "reconnect_delay_sec": 0, 00:23:37.070 "fast_io_fail_timeout_sec": 0, 00:23:37.070 "disable_auto_failback": false, 00:23:37.070 "generate_uuids": false, 00:23:37.070 "transport_tos": 0, 00:23:37.070 "nvme_error_stat": false, 00:23:37.070 "rdma_srq_size": 0, 00:23:37.070 "io_path_stat": false, 00:23:37.070 "allow_accel_sequence": false, 00:23:37.070 "rdma_max_cq_size": 0, 00:23:37.070 "rdma_cm_event_timeout_ms": 0, 00:23:37.070 "dhchap_digests": [ 00:23:37.070 "sha256", 00:23:37.070 "sha384", 00:23:37.070 "sha512" 00:23:37.070 ], 00:23:37.070 "dhchap_dhgroups": [ 00:23:37.070 "null", 00:23:37.070 "ffdhe2048", 00:23:37.070 "ffdhe3072", 00:23:37.070 "ffdhe4096", 00:23:37.070 "ffdhe6144", 00:23:37.070 "ffdhe8192" 00:23:37.070 ] 00:23:37.070 } 00:23:37.070 }, 00:23:37.070 { 00:23:37.070 "method": "bdev_nvme_attach_controller", 00:23:37.070 "params": { 00:23:37.070 "name": "TLSTEST", 00:23:37.070 "trtype": "TCP", 00:23:37.070 "adrfam": "IPv4", 00:23:37.070 "traddr": "10.0.0.2", 00:23:37.070 "trsvcid": "4420", 00:23:37.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.070 "prchk_reftag": false, 00:23:37.070 "prchk_guard": false, 00:23:37.070 "ctrlr_loss_timeout_sec": 0, 00:23:37.070 "reconnect_delay_sec": 0, 00:23:37.070 "fast_io_fail_timeout_sec": 0, 00:23:37.070 "psk": "/tmp/tmp.CMbDqMk05Q", 00:23:37.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.070 "hdgst": false, 00:23:37.070 "ddgst": false 00:23:37.070 } 00:23:37.070 }, 00:23:37.070 { 00:23:37.070 "method": "bdev_nvme_set_hotplug", 00:23:37.070 "params": { 00:23:37.070 "period_us": 100000, 00:23:37.070 "enable": false 00:23:37.070 } 00:23:37.070 }, 00:23:37.070 { 00:23:37.070 "method": "bdev_wait_for_examine" 00:23:37.070 } 00:23:37.070 ] 00:23:37.070 }, 00:23:37.070 { 00:23:37.070 "subsystem": "nbd", 00:23:37.070 "config": [] 00:23:37.070 } 00:23:37.070 ] 00:23:37.070 }' 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3583463 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3583463 ']' 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3583463 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3583463 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3583463' 00:23:37.070 killing process with pid 3583463 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3583463 00:23:37.070 Received shutdown signal, test time was about 10.000000 seconds 00:23:37.070 00:23:37.070 Latency(us) 00:23:37.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:37.070 =================================================================================================================== 00:23:37.070 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:37.070 [2024-07-26 22:53:29.510035] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:37.070 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3583463 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3583179 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3583179 ']' 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3583179 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3583179 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3583179' 00:23:37.328 killing process with pid 3583179 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3583179 00:23:37.328 [2024-07-26 22:53:29.763593] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:37.328 22:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3583179 00:23:37.587 22:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:37.587 22:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:37.587 22:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:37.587 "subsystems": [ 00:23:37.587 { 00:23:37.587 "subsystem": "keyring", 00:23:37.587 "config": [] 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "subsystem": "iobuf", 00:23:37.587 "config": [ 00:23:37.587 { 00:23:37.587 "method": "iobuf_set_options", 00:23:37.587 "params": { 00:23:37.587 "small_pool_count": 8192, 00:23:37.587 "large_pool_count": 1024, 00:23:37.587 "small_bufsize": 8192, 00:23:37.587 "large_bufsize": 135168 00:23:37.587 } 00:23:37.587 } 00:23:37.587 ] 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "subsystem": "sock", 00:23:37.587 "config": [ 00:23:37.587 { 00:23:37.587 "method": "sock_set_default_impl", 00:23:37.587 "params": { 00:23:37.587 "impl_name": "posix" 00:23:37.587 } 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "method": "sock_impl_set_options", 00:23:37.587 "params": { 00:23:37.587 "impl_name": "ssl", 00:23:37.587 "recv_buf_size": 4096, 00:23:37.587 "send_buf_size": 4096, 00:23:37.587 "enable_recv_pipe": true, 00:23:37.587 "enable_quickack": false, 00:23:37.587 "enable_placement_id": 0, 00:23:37.587 "enable_zerocopy_send_server": true, 00:23:37.587 "enable_zerocopy_send_client": false, 00:23:37.587 "zerocopy_threshold": 0, 00:23:37.587 "tls_version": 0, 00:23:37.587 "enable_ktls": false 00:23:37.587 } 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "method": "sock_impl_set_options", 00:23:37.587 "params": { 00:23:37.587 "impl_name": "posix", 00:23:37.587 "recv_buf_size": 2097152, 00:23:37.587 "send_buf_size": 2097152, 00:23:37.587 "enable_recv_pipe": true, 
00:23:37.587 "enable_quickack": false, 00:23:37.587 "enable_placement_id": 0, 00:23:37.587 "enable_zerocopy_send_server": true, 00:23:37.587 "enable_zerocopy_send_client": false, 00:23:37.587 "zerocopy_threshold": 0, 00:23:37.587 "tls_version": 0, 00:23:37.587 "enable_ktls": false 00:23:37.587 } 00:23:37.587 } 00:23:37.587 ] 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "subsystem": "vmd", 00:23:37.587 "config": [] 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "subsystem": "accel", 00:23:37.587 "config": [ 00:23:37.587 { 00:23:37.587 "method": "accel_set_options", 00:23:37.587 "params": { 00:23:37.587 "small_cache_size": 128, 00:23:37.587 "large_cache_size": 16, 00:23:37.587 "task_count": 2048, 00:23:37.587 "sequence_count": 2048, 00:23:37.587 "buf_count": 2048 00:23:37.587 } 00:23:37.587 } 00:23:37.587 ] 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "subsystem": "bdev", 00:23:37.587 "config": [ 00:23:37.587 { 00:23:37.587 "method": "bdev_set_options", 00:23:37.587 "params": { 00:23:37.587 "bdev_io_pool_size": 65535, 00:23:37.587 "bdev_io_cache_size": 256, 00:23:37.587 "bdev_auto_examine": true, 00:23:37.587 "iobuf_small_cache_size": 128, 00:23:37.587 "iobuf_large_cache_size": 16 00:23:37.587 } 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "method": "bdev_raid_set_options", 00:23:37.587 "params": { 00:23:37.587 "process_window_size_kb": 1024 00:23:37.587 } 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "method": "bdev_iscsi_set_options", 00:23:37.587 "params": { 00:23:37.587 "timeout_sec": 30 00:23:37.587 } 00:23:37.587 }, 00:23:37.587 { 00:23:37.587 "method": "bdev_nvme_set_options", 00:23:37.587 "params": { 00:23:37.587 "action_on_timeout": "none", 00:23:37.587 "timeout_us": 0, 00:23:37.587 "timeout_admin_us": 0, 00:23:37.587 "keep_alive_timeout_ms": 10000, 00:23:37.587 "arbitration_burst": 0, 00:23:37.587 "low_priority_weight": 0, 00:23:37.587 "medium_priority_weight": 0, 00:23:37.587 "high_priority_weight": 0, 00:23:37.587 "nvme_adminq_poll_period_us": 10000, 00:23:37.587 "nvme_ioq_poll_period_us": 0, 00:23:37.587 "io_queue_requests": 0, 00:23:37.587 "delay_cmd_submit": true, 00:23:37.587 "transport_retry_count": 4, 00:23:37.587 "bdev_retry_count": 3, 00:23:37.587 "transport_ack_timeout": 0, 00:23:37.587 "ctrlr_loss_timeout_sec": 0, 00:23:37.588 "reconnect_delay_sec": 0, 00:23:37.588 "fast_io_fail_timeout_sec": 0, 00:23:37.588 "disable_auto_failback": false, 00:23:37.588 "generate_uuids": false, 00:23:37.588 "transport_tos": 0, 00:23:37.588 "nvme_error_stat": false, 00:23:37.588 "rdma_srq_size": 0, 00:23:37.588 "io_path_stat": false, 00:23:37.588 "allow_accel_sequence": false, 00:23:37.588 "rdma_max_cq_size": 0, 00:23:37.588 "rdma_cm_event_timeout_ms": 0, 00:23:37.588 "dhchap_digests": [ 00:23:37.588 "sha256", 00:23:37.588 "sha384", 00:23:37.588 "sha512" 00:23:37.588 ], 00:23:37.588 "dhchap_dhgroups": [ 00:23:37.588 "null", 00:23:37.588 "ffdhe2048", 00:23:37.588 "ffdhe3072", 00:23:37.588 "ffdhe4096", 00:23:37.588 "ffdhe 22:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:37.588 6144", 00:23:37.588 "ffdhe8192" 00:23:37.588 ] 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "bdev_nvme_set_hotplug", 00:23:37.588 "params": { 00:23:37.588 "period_us": 100000, 00:23:37.588 "enable": false 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "bdev_malloc_create", 00:23:37.588 "params": { 00:23:37.588 "name": "malloc0", 00:23:37.588 "num_blocks": 8192, 00:23:37.588 "block_size": 4096, 00:23:37.588 "physical_block_size": 4096, 
00:23:37.588 "uuid": "1d4268ce-1838-442e-b2da-f11ae3c61c90", 00:23:37.588 "optimal_io_boundary": 0 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "bdev_wait_for_examine" 00:23:37.588 } 00:23:37.588 ] 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "subsystem": "nbd", 00:23:37.588 "config": [] 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "subsystem": "scheduler", 00:23:37.588 "config": [ 00:23:37.588 { 00:23:37.588 "method": "framework_set_scheduler", 00:23:37.588 "params": { 00:23:37.588 "name": "static" 00:23:37.588 } 00:23:37.588 } 00:23:37.588 ] 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "subsystem": "nvmf", 00:23:37.588 "config": [ 00:23:37.588 { 00:23:37.588 "method": "nvmf_set_config", 00:23:37.588 "params": { 00:23:37.588 "discovery_filter": "match_any", 00:23:37.588 "admin_cmd_passthru": { 00:23:37.588 "identify_ctrlr": false 00:23:37.588 } 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "nvmf_set_max_subsystems", 00:23:37.588 "params": { 00:23:37.588 "max_subsystems": 1024 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "nvmf_set_crdt", 00:23:37.588 "params": { 00:23:37.588 "crdt1": 0, 00:23:37.588 "crdt2": 0, 00:23:37.588 "crdt3": 0 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "nvmf_create_transport", 00:23:37.588 "params": { 00:23:37.588 "trtype": "TCP", 00:23:37.588 "max_queue_depth": 128, 00:23:37.588 "max_io_qpairs_per_ctrlr": 127, 00:23:37.588 "in_capsule_data_size": 4096, 00:23:37.588 "max_io_size": 131072, 00:23:37.588 "io_unit_size": 131072, 00:23:37.588 "max_aq_depth": 128, 00:23:37.588 "num_shared_buffers": 511, 00:23:37.588 "buf_cache_size": 4294967295, 00:23:37.588 "dif_insert_or_strip": false, 00:23:37.588 "zcopy": false, 00:23:37.588 "c2h_success": false, 00:23:37.588 "sock_priority": 0, 00:23:37.588 "abort_timeout_sec": 1, 00:23:37.588 "ack_timeout": 0, 00:23:37.588 "data_wr_pool_size": 0 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "nvmf_create_subsystem", 00:23:37.588 "params": { 00:23:37.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.588 "allow_any_host": false, 00:23:37.588 "serial_number": "SPDK00000000000001", 00:23:37.588 "model_number": "SPDK bdev Controller", 00:23:37.588 "max_namespaces": 10, 00:23:37.588 "min_cntlid": 1, 00:23:37.588 "max_cntlid": 65519, 00:23:37.588 "ana_reporting": false 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "nvmf_subsystem_add_host", 00:23:37.588 "params": { 00:23:37.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.588 "host": "nqn.2016-06.io.spdk:host1", 00:23:37.588 "psk": "/tmp/tmp.CMbDqMk05Q" 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "nvmf_subsystem_add_ns", 00:23:37.588 "params": { 00:23:37.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.588 "namespace": { 00:23:37.588 "nsid": 1, 00:23:37.588 "bdev_name": "malloc0", 00:23:37.588 "nguid": "1D4268CE1838442EB2DAF11AE3C61C90", 00:23:37.588 "uuid": "1d4268ce-1838-442e-b2da-f11ae3c61c90", 00:23:37.588 "no_auto_visible": false 00:23:37.588 } 00:23:37.588 } 00:23:37.588 }, 00:23:37.588 { 00:23:37.588 "method": "nvmf_subsystem_add_listener", 00:23:37.588 "params": { 00:23:37.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.588 "listen_address": { 00:23:37.588 "trtype": "TCP", 00:23:37.588 "adrfam": "IPv4", 00:23:37.588 "traddr": "10.0.0.2", 00:23:37.588 "trsvcid": "4420" 00:23:37.588 }, 00:23:37.588 "secure_channel": true 00:23:37.588 } 00:23:37.588 } 00:23:37.588 ] 00:23:37.588 } 00:23:37.588 ] 00:23:37.588 }' 
00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3583623 00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3583623 00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3583623 ']' 00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:37.588 22:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.588 [2024-07-26 22:53:30.059614] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:37.588 [2024-07-26 22:53:30.059736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.847 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.847 [2024-07-26 22:53:30.130467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.847 [2024-07-26 22:53:30.222456] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.847 [2024-07-26 22:53:30.222511] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.847 [2024-07-26 22:53:30.222539] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.847 [2024-07-26 22:53:30.222551] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.847 [2024-07-26 22:53:30.222561] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.847 [2024-07-26 22:53:30.222660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.105 [2024-07-26 22:53:30.454736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.105 [2024-07-26 22:53:30.470705] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:38.105 [2024-07-26 22:53:30.486756] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.105 [2024-07-26 22:53:30.494221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3583773 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3583773 /var/tmp/bdevperf.sock 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3583773 ']' 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.671 22:53:31 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:38.671 "subsystems": [ 00:23:38.671 { 00:23:38.671 "subsystem": "keyring", 00:23:38.671 "config": [] 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "subsystem": "iobuf", 00:23:38.671 "config": [ 00:23:38.671 { 00:23:38.671 "method": "iobuf_set_options", 00:23:38.671 "params": { 00:23:38.671 "small_pool_count": 8192, 00:23:38.671 "large_pool_count": 1024, 00:23:38.671 "small_bufsize": 8192, 00:23:38.671 "large_bufsize": 135168 00:23:38.671 } 00:23:38.671 } 00:23:38.671 ] 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "subsystem": "sock", 00:23:38.671 "config": [ 00:23:38.671 { 00:23:38.671 "method": "sock_set_default_impl", 00:23:38.671 "params": { 00:23:38.671 "impl_name": "posix" 00:23:38.671 } 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "method": "sock_impl_set_options", 00:23:38.671 "params": { 00:23:38.671 "impl_name": "ssl", 00:23:38.671 "recv_buf_size": 4096, 00:23:38.671 "send_buf_size": 4096, 00:23:38.671 "enable_recv_pipe": true, 00:23:38.671 "enable_quickack": false, 00:23:38.671 "enable_placement_id": 0, 00:23:38.671 "enable_zerocopy_send_server": true, 00:23:38.671 "enable_zerocopy_send_client": false, 00:23:38.671 "zerocopy_threshold": 0, 00:23:38.671 "tls_version": 0, 00:23:38.671 "enable_ktls": false 00:23:38.671 } 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "method": "sock_impl_set_options", 00:23:38.671 "params": { 00:23:38.671 "impl_name": "posix", 00:23:38.671 "recv_buf_size": 2097152, 00:23:38.671 "send_buf_size": 2097152, 00:23:38.671 "enable_recv_pipe": true, 00:23:38.671 
"enable_quickack": false, 00:23:38.671 "enable_placement_id": 0, 00:23:38.671 "enable_zerocopy_send_server": true, 00:23:38.671 "enable_zerocopy_send_client": false, 00:23:38.671 "zerocopy_threshold": 0, 00:23:38.671 "tls_version": 0, 00:23:38.671 "enable_ktls": false 00:23:38.671 } 00:23:38.671 } 00:23:38.671 ] 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "subsystem": "vmd", 00:23:38.671 "config": [] 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "subsystem": "accel", 00:23:38.671 "config": [ 00:23:38.671 { 00:23:38.671 "method": "accel_set_options", 00:23:38.671 "params": { 00:23:38.671 "small_cache_size": 128, 00:23:38.671 "large_cache_size": 16, 00:23:38.671 "task_count": 2048, 00:23:38.671 "sequence_count": 2048, 00:23:38.671 "buf_count": 2048 00:23:38.671 } 00:23:38.671 } 00:23:38.671 ] 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "subsystem": "bdev", 00:23:38.671 "config": [ 00:23:38.671 { 00:23:38.671 "method": "bdev_set_options", 00:23:38.671 "params": { 00:23:38.671 "bdev_io_pool_size": 65535, 00:23:38.671 "bdev_io_cache_size": 256, 00:23:38.671 "bdev_auto_examine": true, 00:23:38.671 "iobuf_small_cache_size": 128, 00:23:38.671 "iobuf_large_cache_size": 16 00:23:38.671 } 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "method": "bdev_raid_set_options", 00:23:38.671 "params": { 00:23:38.671 "process_window_size_kb": 1024 00:23:38.671 } 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "method": "bdev_iscsi_set_options", 00:23:38.671 "params": { 00:23:38.671 "timeout_sec": 30 00:23:38.671 } 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "method": "bdev_nvme_set_options", 00:23:38.671 "params": { 00:23:38.671 "action_on_timeout": "none", 00:23:38.671 "timeout_us": 0, 00:23:38.671 "timeout_admin_us": 0, 00:23:38.671 "keep_alive_timeout_ms": 10000, 00:23:38.671 "arbitration_burst": 0, 00:23:38.671 "low_priority_weight": 0, 00:23:38.671 "medium_priority_weight": 0, 00:23:38.671 "high_priority_weight": 0, 00:23:38.671 "nvme_adminq_poll_period_us": 10000, 00:23:38.671 "nvme_ioq_poll_period_us": 0, 00:23:38.671 "io_queue_requests": 512, 00:23:38.671 "delay_cmd_submit": true, 00:23:38.671 "transport_retry_count": 4, 00:23:38.671 "bdev_retry_count": 3, 00:23:38.671 "transport_ack_timeout": 0, 00:23:38.671 "ctrlr_loss_timeout_sec": 0, 00:23:38.671 "reconnect_delay_sec": 0, 00:23:38.671 "fast_io_fail_timeout_sec": 0, 00:23:38.671 "disable_auto_failback": false, 00:23:38.671 "generate_uuids": false, 00:23:38.671 "transport_tos": 0, 00:23:38.671 "nvme_error_stat": false, 00:23:38.671 "rdma_srq_size": 0, 00:23:38.671 "io_path_stat": false, 00:23:38.671 "allow_accel_sequence": false, 00:23:38.671 "rdma_max_cq_size": 0, 00:23:38.671 "rdma_cm_event_timeout_ms": 0, 00:23:38.671 "dhchap_digests": [ 00:23:38.671 "sha256", 00:23:38.671 "sha384", 00:23:38.671 "sha512" 00:23:38.671 ], 00:23:38.671 "dhchap_dhgroups": [ 00:23:38.671 "null", 00:23:38.671 "ffdhe2048", 00:23:38.671 "ffdhe3072", 00:23:38.671 "ffdhe4096", 00:23:38.671 "ffdhe6144", 00:23:38.671 "ffdhe8192" 00:23:38.671 ] 00:23:38.671 } 00:23:38.671 }, 00:23:38.671 { 00:23:38.671 "method": "bdev_nvme_attach_controller", 00:23:38.671 "params": { 00:23:38.671 "name": "TLSTEST", 00:23:38.672 "trtype": "TCP", 00:23:38.672 "adrfam": "IPv4", 00:23:38.672 "traddr": "10.0.0.2", 00:23:38.672 "trsvcid": "4420", 00:23:38.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.672 "prchk_reftag": false, 00:23:38.672 "prchk_guard": false, 00:23:38.672 "ctrlr_loss_timeout_sec": 0, 00:23:38.672 "reconnect_delay_sec": 0, 00:23:38.672 "fast_io_fail_timeout_sec": 0, 00:23:38.672 
"psk": "/tmp/tmp.CMbDqMk05Q", 00:23:38.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.672 "hdgst": false, 00:23:38.672 "ddgst": false 00:23:38.672 } 00:23:38.672 }, 00:23:38.672 { 00:23:38.672 "method": "bdev_nvme_set_hotplug", 00:23:38.672 "params": { 00:23:38.672 "period_us": 100000, 00:23:38.672 "enable": false 00:23:38.672 } 00:23:38.672 }, 00:23:38.672 { 00:23:38.672 "method": "bdev_wait_for_examine" 00:23:38.672 } 00:23:38.672 ] 00:23:38.672 }, 00:23:38.672 { 00:23:38.672 "subsystem": "nbd", 00:23:38.672 "config": [] 00:23:38.672 } 00:23:38.672 ] 00:23:38.672 }' 00:23:38.672 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.672 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.672 22:53:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.672 [2024-07-26 22:53:31.141918] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:38.672 [2024-07-26 22:53:31.141993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583773 ] 00:23:38.672 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.930 [2024-07-26 22:53:31.200971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.930 [2024-07-26 22:53:31.287134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.188 [2024-07-26 22:53:31.449475] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.188 [2024-07-26 22:53:31.449617] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:39.753 22:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.753 22:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:39.754 22:53:32 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:39.754 Running I/O for 10 seconds... 
00:23:51.975 00:23:51.975 Latency(us) 00:23:51.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.975 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.975 Verification LBA range: start 0x0 length 0x2000 00:23:51.975 TLSTESTn1 : 10.05 2170.59 8.48 0.00 0.00 58807.62 6043.88 93595.12 00:23:51.975 =================================================================================================================== 00:23:51.975 Total : 2170.59 8.48 0.00 0.00 58807.62 6043.88 93595.12 00:23:51.975 0 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3583773 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3583773 ']' 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3583773 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3583773 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3583773' 00:23:51.975 killing process with pid 3583773 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3583773 00:23:51.975 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.975 00:23:51.975 Latency(us) 00:23:51.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.975 =================================================================================================================== 00:23:51.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.975 [2024-07-26 22:53:42.334601] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3583773 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3583623 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3583623 ']' 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3583623 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3583623 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3583623' 00:23:51.975 killing process with pid 3583623 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3583623 00:23:51.975 [2024-07-26 22:53:42.590261] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in 
v24.09 hit 1 times 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3583623 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3585175 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3585175 00:23:51.975 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3585175 ']' 00:23:51.976 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.976 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.976 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.976 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.976 22:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.976 [2024-07-26 22:53:42.883914] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:51.976 [2024-07-26 22:53:42.883998] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.976 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.976 [2024-07-26 22:53:42.951278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.976 [2024-07-26 22:53:43.039648] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.976 [2024-07-26 22:53:43.039713] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.976 [2024-07-26 22:53:43.039729] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.976 [2024-07-26 22:53:43.039750] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.976 [2024-07-26 22:53:43.039762] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
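The nvmfappstart/waitforlisten sequence above boots a fresh target inside the cvl_0_0_ns_spdk network namespace and then polls its RPC socket until the app answers; the return 0 that follows is that poll loop succeeding. A simplified stand-in for those helpers, using the command from this run plus rpc_get_methods as the liveness probe (waitforlisten itself adds retry limits and error handling on top):

    # Start the target in the test namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # keep retrying until the app listens on its RPC socket
    done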
00:23:51.976 [2024-07-26 22:53:43.039793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.CMbDqMk05Q 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CMbDqMk05Q 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:51.976 [2024-07-26 22:53:43.456354] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:51.976 22:53:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:51.976 [2024-07-26 22:53:44.017931] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.976 [2024-07-26 22:53:44.018230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.976 22:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:51.976 malloc0 00:23:51.976 22:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:52.234 22:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CMbDqMk05Q 00:23:52.491 [2024-07-26 22:53:44.864746] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3585381 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3585381 /var/tmp/bdevperf.sock 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3585381 ']' 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:52.491 22:53:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.491 [2024-07-26 22:53:44.929241] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:52.491 [2024-07-26 22:53:44.929310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585381 ] 00:23:52.491 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.491 [2024-07-26 22:53:44.986433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.749 [2024-07-26 22:53:45.075323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.749 22:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:52.749 22:53:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:52.749 22:53:45 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CMbDqMk05Q 00:23:53.007 22:53:45 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:53.265 [2024-07-26 22:53:45.700509] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.523 nvme0n1 00:23:53.523 22:53:45 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:53.523 Running I/O for 1 seconds... 
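Condensing the rpc.py calls traced above: the target enables TLS on its listener with -k and authorizes the host by PSK file, while the initiator registers the same PSK as keyring entry key0 and attaches with --psk key0; this keyring flow is the replacement for the path-based PSK used in the first run. Every flag below is copied from the trace (rpc.py abbreviates the full scripts/rpc.py path):

    # Target (default /var/tmp/spdk.sock): TLS listener plus malloc-backed namespace.
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.CMbDqMk05Q
    # Initiator (bdevperf's socket): register the PSK under a key name, attach by name.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CMbDqMk05Q
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

bdevperf.py -s /var/tmp/bdevperf.sock perform_tests then kicks off the 1-second verify job reported next.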
00:23:54.896 00:23:54.896 Latency(us) 00:23:54.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.896 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:54.896 Verification LBA range: start 0x0 length 0x2000 00:23:54.896 nvme0n1 : 1.05 1964.38 7.67 0.00 0.00 63701.89 6213.78 108741.21 00:23:54.896 =================================================================================================================== 00:23:54.896 Total : 1964.38 7.67 0.00 0.00 63701.89 6213.78 108741.21 00:23:54.896 0 00:23:54.896 22:53:46 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3585381 00:23:54.896 22:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3585381 ']' 00:23:54.897 22:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3585381 00:23:54.897 22:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:54.897 22:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:54.897 22:53:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3585381 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3585381' 00:23:54.897 killing process with pid 3585381 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3585381 00:23:54.897 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.897 00:23:54.897 Latency(us) 00:23:54.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.897 =================================================================================================================== 00:23:54.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3585381 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3585175 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3585175 ']' 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3585175 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3585175 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3585175' 00:23:54.897 killing process with pid 3585175 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3585175 00:23:54.897 [2024-07-26 22:53:47.265483] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:54.897 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3585175 00:23:55.155 22:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:55.155 22:53:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:55.155 
22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:55.155 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.156 22:53:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3585785 00:23:55.156 22:53:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:55.156 22:53:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3585785 00:23:55.156 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3585785 ']' 00:23:55.156 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.156 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:55.156 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.156 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:55.156 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.156 [2024-07-26 22:53:47.545341] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:55.156 [2024-07-26 22:53:47.545440] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.156 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.156 [2024-07-26 22:53:47.606994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.414 [2024-07-26 22:53:47.694470] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.414 [2024-07-26 22:53:47.694535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.414 [2024-07-26 22:53:47.694552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.414 [2024-07-26 22:53:47.694565] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.414 [2024-07-26 22:53:47.694576] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.414 [2024-07-26 22:53:47.694608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.414 [2024-07-26 22:53:47.845164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.414 malloc0 00:23:55.414 [2024-07-26 22:53:47.877970] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.414 [2024-07-26 22:53:47.878286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3585808 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3585808 /var/tmp/bdevperf.sock 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3585808 ']' 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:55.414 22:53:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.677 [2024-07-26 22:53:47.947640] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:55.677 [2024-07-26 22:53:47.947717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585808 ] 00:23:55.677 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.677 [2024-07-26 22:53:48.008247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.677 [2024-07-26 22:53:48.099211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.934 22:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.934 22:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:55.934 22:53:48 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CMbDqMk05Q 00:23:56.192 22:53:48 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:56.450 [2024-07-26 22:53:48.740602] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.450 nvme0n1 00:23:56.450 22:53:48 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.450 Running I/O for 1 seconds... 00:23:57.822 00:23:57.822 Latency(us) 00:23:57.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.822 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:57.822 Verification LBA range: start 0x0 length 0x2000 00:23:57.822 nvme0n1 : 1.06 2085.27 8.15 0.00 0.00 60019.94 8252.68 101750.71 00:23:57.822 =================================================================================================================== 00:23:57.822 Total : 2085.27 8.15 0.00 0.00 60019.94 8252.68 101750.71 00:23:57.822 0 00:23:57.822 22:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:57.822 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.822 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.822 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.822 22:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:57.822 "subsystems": [ 00:23:57.822 { 00:23:57.822 "subsystem": "keyring", 00:23:57.822 "config": [ 00:23:57.822 { 00:23:57.822 "method": "keyring_file_add_key", 00:23:57.822 "params": { 00:23:57.822 "name": "key0", 00:23:57.822 "path": "/tmp/tmp.CMbDqMk05Q" 00:23:57.822 } 00:23:57.822 } 00:23:57.822 ] 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "subsystem": "iobuf", 00:23:57.822 "config": [ 00:23:57.822 { 00:23:57.822 "method": "iobuf_set_options", 00:23:57.822 "params": { 00:23:57.822 "small_pool_count": 8192, 00:23:57.822 "large_pool_count": 1024, 00:23:57.822 "small_bufsize": 8192, 00:23:57.822 "large_bufsize": 135168 00:23:57.822 } 00:23:57.822 } 00:23:57.822 ] 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "subsystem": "sock", 00:23:57.822 "config": [ 00:23:57.822 { 00:23:57.822 "method": "sock_set_default_impl", 00:23:57.822 "params": { 00:23:57.822 "impl_name": "posix" 00:23:57.822 } 00:23:57.822 }, 00:23:57.822 
{ 00:23:57.822 "method": "sock_impl_set_options", 00:23:57.822 "params": { 00:23:57.822 "impl_name": "ssl", 00:23:57.822 "recv_buf_size": 4096, 00:23:57.822 "send_buf_size": 4096, 00:23:57.822 "enable_recv_pipe": true, 00:23:57.822 "enable_quickack": false, 00:23:57.822 "enable_placement_id": 0, 00:23:57.822 "enable_zerocopy_send_server": true, 00:23:57.822 "enable_zerocopy_send_client": false, 00:23:57.822 "zerocopy_threshold": 0, 00:23:57.822 "tls_version": 0, 00:23:57.822 "enable_ktls": false 00:23:57.822 } 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "method": "sock_impl_set_options", 00:23:57.822 "params": { 00:23:57.822 "impl_name": "posix", 00:23:57.822 "recv_buf_size": 2097152, 00:23:57.822 "send_buf_size": 2097152, 00:23:57.822 "enable_recv_pipe": true, 00:23:57.822 "enable_quickack": false, 00:23:57.822 "enable_placement_id": 0, 00:23:57.822 "enable_zerocopy_send_server": true, 00:23:57.822 "enable_zerocopy_send_client": false, 00:23:57.822 "zerocopy_threshold": 0, 00:23:57.822 "tls_version": 0, 00:23:57.822 "enable_ktls": false 00:23:57.822 } 00:23:57.822 } 00:23:57.822 ] 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "subsystem": "vmd", 00:23:57.822 "config": [] 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "subsystem": "accel", 00:23:57.822 "config": [ 00:23:57.822 { 00:23:57.822 "method": "accel_set_options", 00:23:57.822 "params": { 00:23:57.822 "small_cache_size": 128, 00:23:57.822 "large_cache_size": 16, 00:23:57.822 "task_count": 2048, 00:23:57.822 "sequence_count": 2048, 00:23:57.822 "buf_count": 2048 00:23:57.822 } 00:23:57.822 } 00:23:57.822 ] 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "subsystem": "bdev", 00:23:57.822 "config": [ 00:23:57.822 { 00:23:57.822 "method": "bdev_set_options", 00:23:57.822 "params": { 00:23:57.822 "bdev_io_pool_size": 65535, 00:23:57.822 "bdev_io_cache_size": 256, 00:23:57.822 "bdev_auto_examine": true, 00:23:57.822 "iobuf_small_cache_size": 128, 00:23:57.822 "iobuf_large_cache_size": 16 00:23:57.822 } 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "method": "bdev_raid_set_options", 00:23:57.822 "params": { 00:23:57.822 "process_window_size_kb": 1024 00:23:57.822 } 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "method": "bdev_iscsi_set_options", 00:23:57.822 "params": { 00:23:57.822 "timeout_sec": 30 00:23:57.822 } 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "method": "bdev_nvme_set_options", 00:23:57.822 "params": { 00:23:57.822 "action_on_timeout": "none", 00:23:57.822 "timeout_us": 0, 00:23:57.822 "timeout_admin_us": 0, 00:23:57.822 "keep_alive_timeout_ms": 10000, 00:23:57.822 "arbitration_burst": 0, 00:23:57.822 "low_priority_weight": 0, 00:23:57.822 "medium_priority_weight": 0, 00:23:57.822 "high_priority_weight": 0, 00:23:57.822 "nvme_adminq_poll_period_us": 10000, 00:23:57.822 "nvme_ioq_poll_period_us": 0, 00:23:57.822 "io_queue_requests": 0, 00:23:57.822 "delay_cmd_submit": true, 00:23:57.822 "transport_retry_count": 4, 00:23:57.822 "bdev_retry_count": 3, 00:23:57.822 "transport_ack_timeout": 0, 00:23:57.822 "ctrlr_loss_timeout_sec": 0, 00:23:57.822 "reconnect_delay_sec": 0, 00:23:57.822 "fast_io_fail_timeout_sec": 0, 00:23:57.822 "disable_auto_failback": false, 00:23:57.822 "generate_uuids": false, 00:23:57.822 "transport_tos": 0, 00:23:57.822 "nvme_error_stat": false, 00:23:57.822 "rdma_srq_size": 0, 00:23:57.822 "io_path_stat": false, 00:23:57.822 "allow_accel_sequence": false, 00:23:57.822 "rdma_max_cq_size": 0, 00:23:57.822 "rdma_cm_event_timeout_ms": 0, 00:23:57.822 "dhchap_digests": [ 00:23:57.822 "sha256", 00:23:57.822 "sha384", 
00:23:57.822 "sha512" 00:23:57.822 ], 00:23:57.822 "dhchap_dhgroups": [ 00:23:57.822 "null", 00:23:57.822 "ffdhe2048", 00:23:57.822 "ffdhe3072", 00:23:57.822 "ffdhe4096", 00:23:57.822 "ffdhe6144", 00:23:57.822 "ffdhe8192" 00:23:57.822 ] 00:23:57.822 } 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "method": "bdev_nvme_set_hotplug", 00:23:57.822 "params": { 00:23:57.822 "period_us": 100000, 00:23:57.822 "enable": false 00:23:57.822 } 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "method": "bdev_malloc_create", 00:23:57.822 "params": { 00:23:57.822 "name": "malloc0", 00:23:57.822 "num_blocks": 8192, 00:23:57.822 "block_size": 4096, 00:23:57.822 "physical_block_size": 4096, 00:23:57.822 "uuid": "49f756f4-ce77-4e90-af8e-31f7b812a086", 00:23:57.822 "optimal_io_boundary": 0 00:23:57.822 } 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "method": "bdev_wait_for_examine" 00:23:57.822 } 00:23:57.822 ] 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "subsystem": "nbd", 00:23:57.822 "config": [] 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "subsystem": "scheduler", 00:23:57.822 "config": [ 00:23:57.822 { 00:23:57.822 "method": "framework_set_scheduler", 00:23:57.822 "params": { 00:23:57.822 "name": "static" 00:23:57.822 } 00:23:57.822 } 00:23:57.822 ] 00:23:57.822 }, 00:23:57.822 { 00:23:57.822 "subsystem": "nvmf", 00:23:57.822 "config": [ 00:23:57.822 { 00:23:57.822 "method": "nvmf_set_config", 00:23:57.822 "params": { 00:23:57.823 "discovery_filter": "match_any", 00:23:57.823 "admin_cmd_passthru": { 00:23:57.823 "identify_ctrlr": false 00:23:57.823 } 00:23:57.823 } 00:23:57.823 }, 00:23:57.823 { 00:23:57.823 "method": "nvmf_set_max_subsystems", 00:23:57.823 "params": { 00:23:57.823 "max_subsystems": 1024 00:23:57.823 } 00:23:57.823 }, 00:23:57.823 { 00:23:57.823 "method": "nvmf_set_crdt", 00:23:57.823 "params": { 00:23:57.823 "crdt1": 0, 00:23:57.823 "crdt2": 0, 00:23:57.823 "crdt3": 0 00:23:57.823 } 00:23:57.823 }, 00:23:57.823 { 00:23:57.823 "method": "nvmf_create_transport", 00:23:57.823 "params": { 00:23:57.823 "trtype": "TCP", 00:23:57.823 "max_queue_depth": 128, 00:23:57.823 "max_io_qpairs_per_ctrlr": 127, 00:23:57.823 "in_capsule_data_size": 4096, 00:23:57.823 "max_io_size": 131072, 00:23:57.823 "io_unit_size": 131072, 00:23:57.823 "max_aq_depth": 128, 00:23:57.823 "num_shared_buffers": 511, 00:23:57.823 "buf_cache_size": 4294967295, 00:23:57.823 "dif_insert_or_strip": false, 00:23:57.823 "zcopy": false, 00:23:57.823 "c2h_success": false, 00:23:57.823 "sock_priority": 0, 00:23:57.823 "abort_timeout_sec": 1, 00:23:57.823 "ack_timeout": 0, 00:23:57.823 "data_wr_pool_size": 0 00:23:57.823 } 00:23:57.823 }, 00:23:57.823 { 00:23:57.823 "method": "nvmf_create_subsystem", 00:23:57.823 "params": { 00:23:57.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.823 "allow_any_host": false, 00:23:57.823 "serial_number": "00000000000000000000", 00:23:57.823 "model_number": "SPDK bdev Controller", 00:23:57.823 "max_namespaces": 32, 00:23:57.823 "min_cntlid": 1, 00:23:57.823 "max_cntlid": 65519, 00:23:57.823 "ana_reporting": false 00:23:57.823 } 00:23:57.823 }, 00:23:57.823 { 00:23:57.823 "method": "nvmf_subsystem_add_host", 00:23:57.823 "params": { 00:23:57.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.823 "host": "nqn.2016-06.io.spdk:host1", 00:23:57.823 "psk": "key0" 00:23:57.823 } 00:23:57.823 }, 00:23:57.823 { 00:23:57.823 "method": "nvmf_subsystem_add_ns", 00:23:57.823 "params": { 00:23:57.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.823 "namespace": { 00:23:57.823 "nsid": 1, 00:23:57.823 "bdev_name": 
"malloc0", 00:23:57.823 "nguid": "49F756F4CE774E90AF8E31F7B812A086", 00:23:57.823 "uuid": "49f756f4-ce77-4e90-af8e-31f7b812a086", 00:23:57.823 "no_auto_visible": false 00:23:57.823 } 00:23:57.823 } 00:23:57.823 }, 00:23:57.823 { 00:23:57.823 "method": "nvmf_subsystem_add_listener", 00:23:57.823 "params": { 00:23:57.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.823 "listen_address": { 00:23:57.823 "trtype": "TCP", 00:23:57.823 "adrfam": "IPv4", 00:23:57.823 "traddr": "10.0.0.2", 00:23:57.823 "trsvcid": "4420" 00:23:57.823 }, 00:23:57.823 "secure_channel": true 00:23:57.823 } 00:23:57.823 } 00:23:57.823 ] 00:23:57.823 } 00:23:57.823 ] 00:23:57.823 }' 00:23:57.823 22:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:58.081 22:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:58.081 "subsystems": [ 00:23:58.081 { 00:23:58.081 "subsystem": "keyring", 00:23:58.081 "config": [ 00:23:58.081 { 00:23:58.081 "method": "keyring_file_add_key", 00:23:58.081 "params": { 00:23:58.081 "name": "key0", 00:23:58.081 "path": "/tmp/tmp.CMbDqMk05Q" 00:23:58.081 } 00:23:58.081 } 00:23:58.081 ] 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "subsystem": "iobuf", 00:23:58.081 "config": [ 00:23:58.081 { 00:23:58.081 "method": "iobuf_set_options", 00:23:58.081 "params": { 00:23:58.081 "small_pool_count": 8192, 00:23:58.081 "large_pool_count": 1024, 00:23:58.081 "small_bufsize": 8192, 00:23:58.081 "large_bufsize": 135168 00:23:58.081 } 00:23:58.081 } 00:23:58.081 ] 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "subsystem": "sock", 00:23:58.081 "config": [ 00:23:58.081 { 00:23:58.081 "method": "sock_set_default_impl", 00:23:58.081 "params": { 00:23:58.081 "impl_name": "posix" 00:23:58.081 } 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "method": "sock_impl_set_options", 00:23:58.081 "params": { 00:23:58.081 "impl_name": "ssl", 00:23:58.081 "recv_buf_size": 4096, 00:23:58.081 "send_buf_size": 4096, 00:23:58.081 "enable_recv_pipe": true, 00:23:58.081 "enable_quickack": false, 00:23:58.081 "enable_placement_id": 0, 00:23:58.081 "enable_zerocopy_send_server": true, 00:23:58.081 "enable_zerocopy_send_client": false, 00:23:58.081 "zerocopy_threshold": 0, 00:23:58.081 "tls_version": 0, 00:23:58.081 "enable_ktls": false 00:23:58.081 } 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "method": "sock_impl_set_options", 00:23:58.081 "params": { 00:23:58.081 "impl_name": "posix", 00:23:58.081 "recv_buf_size": 2097152, 00:23:58.081 "send_buf_size": 2097152, 00:23:58.081 "enable_recv_pipe": true, 00:23:58.081 "enable_quickack": false, 00:23:58.081 "enable_placement_id": 0, 00:23:58.081 "enable_zerocopy_send_server": true, 00:23:58.081 "enable_zerocopy_send_client": false, 00:23:58.081 "zerocopy_threshold": 0, 00:23:58.081 "tls_version": 0, 00:23:58.081 "enable_ktls": false 00:23:58.081 } 00:23:58.081 } 00:23:58.081 ] 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "subsystem": "vmd", 00:23:58.081 "config": [] 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "subsystem": "accel", 00:23:58.081 "config": [ 00:23:58.081 { 00:23:58.081 "method": "accel_set_options", 00:23:58.081 "params": { 00:23:58.081 "small_cache_size": 128, 00:23:58.081 "large_cache_size": 16, 00:23:58.081 "task_count": 2048, 00:23:58.081 "sequence_count": 2048, 00:23:58.081 "buf_count": 2048 00:23:58.081 } 00:23:58.081 } 00:23:58.081 ] 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "subsystem": "bdev", 00:23:58.081 "config": [ 00:23:58.081 { 00:23:58.081 
"method": "bdev_set_options", 00:23:58.081 "params": { 00:23:58.081 "bdev_io_pool_size": 65535, 00:23:58.081 "bdev_io_cache_size": 256, 00:23:58.081 "bdev_auto_examine": true, 00:23:58.081 "iobuf_small_cache_size": 128, 00:23:58.081 "iobuf_large_cache_size": 16 00:23:58.081 } 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "method": "bdev_raid_set_options", 00:23:58.081 "params": { 00:23:58.081 "process_window_size_kb": 1024 00:23:58.081 } 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "method": "bdev_iscsi_set_options", 00:23:58.081 "params": { 00:23:58.081 "timeout_sec": 30 00:23:58.081 } 00:23:58.081 }, 00:23:58.081 { 00:23:58.081 "method": "bdev_nvme_set_options", 00:23:58.081 "params": { 00:23:58.081 "action_on_timeout": "none", 00:23:58.081 "timeout_us": 0, 00:23:58.081 "timeout_admin_us": 0, 00:23:58.081 "keep_alive_timeout_ms": 10000, 00:23:58.081 "arbitration_burst": 0, 00:23:58.081 "low_priority_weight": 0, 00:23:58.081 "medium_priority_weight": 0, 00:23:58.081 "high_priority_weight": 0, 00:23:58.081 "nvme_adminq_poll_period_us": 10000, 00:23:58.081 "nvme_ioq_poll_period_us": 0, 00:23:58.081 "io_queue_requests": 512, 00:23:58.081 "delay_cmd_submit": true, 00:23:58.081 "transport_retry_count": 4, 00:23:58.081 "bdev_retry_count": 3, 00:23:58.081 "transport_ack_timeout": 0, 00:23:58.081 "ctrlr_loss_timeout_sec": 0, 00:23:58.081 "reconnect_delay_sec": 0, 00:23:58.081 "fast_io_fail_timeout_sec": 0, 00:23:58.081 "disable_auto_failback": false, 00:23:58.081 "generate_uuids": false, 00:23:58.081 "transport_tos": 0, 00:23:58.081 "nvme_error_stat": false, 00:23:58.081 "rdma_srq_size": 0, 00:23:58.081 "io_path_stat": false, 00:23:58.081 "allow_accel_sequence": false, 00:23:58.081 "rdma_max_cq_size": 0, 00:23:58.081 "rdma_cm_event_timeout_ms": 0, 00:23:58.081 "dhchap_digests": [ 00:23:58.081 "sha256", 00:23:58.081 "sha384", 00:23:58.081 "sha512" 00:23:58.081 ], 00:23:58.081 "dhchap_dhgroups": [ 00:23:58.081 "null", 00:23:58.081 "ffdhe2048", 00:23:58.081 "ffdhe3072", 00:23:58.081 "ffdhe4096", 00:23:58.081 "ffdhe6144", 00:23:58.081 "ffdhe8192" 00:23:58.081 ] 00:23:58.082 } 00:23:58.082 }, 00:23:58.082 { 00:23:58.082 "method": "bdev_nvme_attach_controller", 00:23:58.082 "params": { 00:23:58.082 "name": "nvme0", 00:23:58.082 "trtype": "TCP", 00:23:58.082 "adrfam": "IPv4", 00:23:58.082 "traddr": "10.0.0.2", 00:23:58.082 "trsvcid": "4420", 00:23:58.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.082 "prchk_reftag": false, 00:23:58.082 "prchk_guard": false, 00:23:58.082 "ctrlr_loss_timeout_sec": 0, 00:23:58.082 "reconnect_delay_sec": 0, 00:23:58.082 "fast_io_fail_timeout_sec": 0, 00:23:58.082 "psk": "key0", 00:23:58.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.082 "hdgst": false, 00:23:58.082 "ddgst": false 00:23:58.082 } 00:23:58.082 }, 00:23:58.082 { 00:23:58.082 "method": "bdev_nvme_set_hotplug", 00:23:58.082 "params": { 00:23:58.082 "period_us": 100000, 00:23:58.082 "enable": false 00:23:58.082 } 00:23:58.082 }, 00:23:58.082 { 00:23:58.082 "method": "bdev_enable_histogram", 00:23:58.082 "params": { 00:23:58.082 "name": "nvme0n1", 00:23:58.082 "enable": true 00:23:58.082 } 00:23:58.082 }, 00:23:58.082 { 00:23:58.082 "method": "bdev_wait_for_examine" 00:23:58.082 } 00:23:58.082 ] 00:23:58.082 }, 00:23:58.082 { 00:23:58.082 "subsystem": "nbd", 00:23:58.082 "config": [] 00:23:58.082 } 00:23:58.082 ] 00:23:58.082 }' 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3585808 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3585808 
']' 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3585808 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3585808 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3585808' 00:23:58.082 killing process with pid 3585808 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3585808 00:23:58.082 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.082 00:23:58.082 Latency(us) 00:23:58.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.082 =================================================================================================================== 00:23:58.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.082 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3585808 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3585785 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3585785 ']' 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3585785 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3585785 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3585785' 00:23:58.340 killing process with pid 3585785 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3585785 00:23:58.340 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3585785 00:23:58.598 22:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:58.598 22:53:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.598 22:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:58.598 "subsystems": [ 00:23:58.598 { 00:23:58.598 "subsystem": "keyring", 00:23:58.598 "config": [ 00:23:58.598 { 00:23:58.598 "method": "keyring_file_add_key", 00:23:58.598 "params": { 00:23:58.598 "name": "key0", 00:23:58.598 "path": "/tmp/tmp.CMbDqMk05Q" 00:23:58.598 } 00:23:58.598 } 00:23:58.598 ] 00:23:58.598 }, 00:23:58.598 { 00:23:58.598 "subsystem": "iobuf", 00:23:58.598 "config": [ 00:23:58.598 { 00:23:58.598 "method": "iobuf_set_options", 00:23:58.598 "params": { 00:23:58.598 "small_pool_count": 8192, 00:23:58.598 "large_pool_count": 1024, 00:23:58.598 "small_bufsize": 8192, 00:23:58.598 "large_bufsize": 135168 00:23:58.598 } 00:23:58.598 } 00:23:58.598 ] 00:23:58.598 }, 00:23:58.598 { 00:23:58.598 "subsystem": "sock", 00:23:58.598 "config": [ 00:23:58.598 { 00:23:58.598 "method": "sock_set_default_impl", 
00:23:58.598 "params": { 00:23:58.598 "impl_name": "posix" 00:23:58.598 } 00:23:58.598 }, 00:23:58.598 { 00:23:58.598 "method": "sock_impl_set_options", 00:23:58.598 "params": { 00:23:58.598 "impl_name": "ssl", 00:23:58.598 "recv_buf_size": 4096, 00:23:58.599 "send_buf_size": 4096, 00:23:58.599 "enable_recv_pipe": true, 00:23:58.599 "enable_quickack": false, 00:23:58.599 "enable_placement_id": 0, 00:23:58.599 "enable_zerocopy_send_server": true, 00:23:58.599 "enable_zerocopy_send_client": false, 00:23:58.599 "zerocopy_threshold": 0, 00:23:58.599 "tls_version": 0, 00:23:58.599 "enable_ktls": false 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "sock_impl_set_options", 00:23:58.599 "params": { 00:23:58.599 "impl_name": "posix", 00:23:58.599 "recv_buf_size": 2097152, 00:23:58.599 "send_buf_size": 2097152, 00:23:58.599 "enable_recv_pipe": true, 00:23:58.599 "enable_quickack": false, 00:23:58.599 "enable_placement_id": 0, 00:23:58.599 "enable_zerocopy_send_server": true, 00:23:58.599 "enable_zerocopy_send_client": false, 00:23:58.599 "zerocopy_threshold": 0, 00:23:58.599 "tls_version": 0, 00:23:58.599 "enable_ktls": false 00:23:58.599 } 00:23:58.599 } 00:23:58.599 ] 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "subsystem": "vmd", 00:23:58.599 "config": [] 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "subsystem": "accel", 00:23:58.599 "config": [ 00:23:58.599 { 00:23:58.599 "method": "accel_set_options", 00:23:58.599 "params": { 00:23:58.599 "small_cache_size": 128, 00:23:58.599 "large_cache_size": 16, 00:23:58.599 "task_count": 2048, 00:23:58.599 "sequence_count": 2048, 00:23:58.599 "buf_count": 2048 00:23:58.599 } 00:23:58.599 } 00:23:58.599 ] 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "subsystem": "bdev", 00:23:58.599 "config": [ 00:23:58.599 { 00:23:58.599 "method": "bdev_set_options", 00:23:58.599 "params": { 00:23:58.599 "bdev_io_pool_size": 65535, 00:23:58.599 "bdev_io_cache_size": 256, 00:23:58.599 "bdev_auto_examine": true, 00:23:58.599 "iobuf_small_cache_size": 128, 00:23:58.599 "iobuf_large_cache_size": 16 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "bdev_raid_set_options", 00:23:58.599 "params": { 00:23:58.599 "process_window_size_kb": 1024 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "bdev_iscsi_set_options", 00:23:58.599 "params": { 00:23:58.599 "timeout_sec": 30 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "bdev_nvme_set_options", 00:23:58.599 "params": { 00:23:58.599 "action_on_timeout": "none", 00:23:58.599 "timeout_us": 0, 00:23:58.599 "timeout_admin_us": 0, 00:23:58.599 "keep_alive_timeout_ms": 10000, 00:23:58.599 "arbitration_burst": 0, 00:23:58.599 "low_priority_weight": 0, 00:23:58.599 "medium_priority_weight": 0, 00:23:58.599 "high_priority_weight": 0, 00:23:58.599 "nvme_adminq_poll_period_us": 10000, 00:23:58.599 "nvme_ioq_poll_period_us": 0, 00:23:58.599 "io_queue_requests": 0, 00:23:58.599 "delay_cmd_submit": true, 00:23:58.599 "transport_retry_count": 4, 00:23:58.599 "bdev_retry_count": 3, 00:23:58.599 "transport_ack_timeout": 0, 00:23:58.599 "ctrlr_loss_timeout_sec": 0, 00:23:58.599 "reconnect_delay_sec": 0, 00:23:58.599 "fast_io_fail_timeout_sec": 0, 00:23:58.599 "disable_auto_failback": false, 00:23:58.599 "generate_uuids": false, 00:23:58.599 "transport_tos": 0, 00:23:58.599 "nvme_error_stat": false, 00:23:58.599 "rdma_srq_size": 0, 00:23:58.599 "io_path_stat": false, 00:23:58.599 "allow_accel_sequence": false, 00:23:58.599 "rdma_max_cq_size": 0, 00:23:58.599 
"rdma_cm_event_timeout_ms": 0, 00:23:58.599 "dhchap_digests": [ 00:23:58.599 "sha256", 00:23:58.599 "sha384", 00:23:58.599 "sha512" 00:23:58.599 ], 00:23:58.599 "dhchap_dhgroups": [ 00:23:58.599 "null", 00:23:58.599 "ffdhe2048", 00:23:58.599 "ffdhe3072", 00:23:58.599 "ffdhe4096", 00:23:58.599 "ffdhe6144", 00:23:58.599 "ffdhe8192" 00:23:58.599 ] 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "bdev_nvme_set_hotplug", 00:23:58.599 "params": { 00:23:58.599 "period_us": 100000, 00:23:58.599 "enable": false 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "bdev_malloc_create", 00:23:58.599 "params": { 00:23:58.599 "name": "malloc0", 00:23:58.599 "num_blocks": 8192, 00:23:58.599 "block_size": 4096, 00:23:58.599 "physical_block_size": 4096, 00:23:58.599 "uuid": "49f756f4-ce77-4e90-af8e-31f7b812a086", 00:23:58.599 "optimal_io_boundary": 0 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "bdev_wait_for_examine" 00:23:58.599 } 00:23:58.599 ] 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "subsystem": "nbd", 00:23:58.599 "config": [] 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "subsystem": "scheduler", 00:23:58.599 "config": [ 00:23:58.599 { 00:23:58.599 "method": "framework_set_scheduler", 00:23:58.599 "params": { 00:23:58.599 "name": "static" 00:23:58.599 } 00:23:58.599 } 00:23:58.599 ] 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "subsystem": "nvmf", 00:23:58.599 "config": [ 00:23:58.599 { 00:23:58.599 "method": "nvmf_set_config", 00:23:58.599 "params": { 00:23:58.599 "discovery_filter": "match_any", 00:23:58.599 "admin_cmd_passthru": { 00:23:58.599 "identify_ctrlr": false 00:23:58.599 } 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "nvmf_set_max_subsystems", 00:23:58.599 "params": { 00:23:58.599 "max_subsystems": 1024 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "nvmf_set_crdt", 00:23:58.599 "params": { 00:23:58.599 "crdt1": 0, 00:23:58.599 "crdt2": 0, 00:23:58.599 "crdt3": 0 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "nvmf_create_transport", 00:23:58.599 "params": { 00:23:58.599 "trtype": "TCP", 00:23:58.599 "max_queue_depth": 128, 00:23:58.599 "max_io_qpairs_per_ctrlr": 127, 00:23:58.599 "in_capsule_data_size": 4096, 00:23:58.599 "max_io_size": 131072, 00:23:58.599 "io_unit_size": 131072, 00:23:58.599 "max_aq_depth": 128, 00:23:58.599 "num_shared_buffers": 511, 00:23:58.599 "buf_cache_size": 4294967295, 00:23:58.599 "dif_insert_or_strip": false, 00:23:58.599 "zcopy": false, 00:23:58.599 "c2h_success": false, 00:23:58.599 "sock_priority": 0, 00:23:58.599 "abort_timeout_sec": 1, 00:23:58.599 "ack_timeout": 0, 00:23:58.599 "data_wr_pool_size": 0 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "nvmf_create_subsystem", 00:23:58.599 "params": { 00:23:58.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.599 "allow_any_host": false, 00:23:58.599 "serial_number": "00000000000000000000", 00:23:58.599 "model_number": "SPDK bdev Controller", 00:23:58.599 "max_namespaces": 32, 00:23:58.599 "min_cntlid": 1, 00:23:58.599 "max_cntlid": 65519, 00:23:58.599 "ana_reporting": false 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "nvmf_subsystem_add_host", 00:23:58.599 "params": { 00:23:58.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.599 "host": "nqn.2016-06.io.spdk:host1", 00:23:58.599 "psk": "key0" 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "nvmf_subsystem_add_ns", 00:23:58.599 "params": { 00:23:58.599 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:58.599 "namespace": { 00:23:58.599 "nsid": 1, 00:23:58.599 "bdev_name": "malloc0", 00:23:58.599 "nguid": "49F756F4CE774E90AF8E31F7B812A086", 00:23:58.599 "uuid": "49f756f4-ce77-4e90-af8e-31f7b812a086", 00:23:58.599 "no_auto_visible": false 00:23:58.599 } 00:23:58.599 } 00:23:58.599 }, 00:23:58.599 { 00:23:58.599 "method": "nvmf_subsystem_add_listener", 00:23:58.599 "params": { 00:23:58.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.599 "listen_address": { 00:23:58.599 "trtype": "TCP", 00:23:58.599 "adrfam": "IPv4", 00:23:58.599 "traddr": "10.0.0.2", 00:23:58.599 "trsvcid": "4420" 00:23:58.599 }, 00:23:58.599 "secure_channel": true 00:23:58.599 } 00:23:58.599 } 00:23:58.599 ] 00:23:58.599 } 00:23:58.599 ] 00:23:58.600 }' 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3586217 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3586217 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3586217 ']' 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:58.600 22:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.600 [2024-07-26 22:53:50.983563] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:58.600 [2024-07-26 22:53:50.983639] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.600 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.600 [2024-07-26 22:53:51.049910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.858 [2024-07-26 22:53:51.144215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.858 [2024-07-26 22:53:51.144279] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.858 [2024-07-26 22:53:51.144307] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.858 [2024-07-26 22:53:51.144320] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.858 [2024-07-26 22:53:51.144330] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:58.858 [2024-07-26 22:53:51.144443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.115 [2024-07-26 22:53:51.388846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.115 [2024-07-26 22:53:51.420851] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.115 [2024-07-26 22:53:51.430281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3586364 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3586364 /var/tmp/bdevperf.sock 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3586364 ']' 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
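The config echoed on the following lines is bperfcfg, the bdevperf-side save_config dump captured earlier; replayed through -c /dev/fd/63 it recreates the key0 keyring entry and the TLS-attached nvme0n1 without any manual RPCs, and it additionally carries the bdev_enable_histogram call, so this run collects latency histograms. A sketch of the replay, under the same process-substitution assumption as above:

    # Fresh bdevperf bootstrapped entirely from the saved initiator config;
    # -z still pauses the workload until perform_tests arrives over RPC.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")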
00:23:59.681 22:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:59.681 "subsystems": [ 00:23:59.681 { 00:23:59.681 "subsystem": "keyring", 00:23:59.681 "config": [ 00:23:59.681 { 00:23:59.681 "method": "keyring_file_add_key", 00:23:59.681 "params": { 00:23:59.681 "name": "key0", 00:23:59.681 "path": "/tmp/tmp.CMbDqMk05Q" 00:23:59.681 } 00:23:59.681 } 00:23:59.681 ] 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "subsystem": "iobuf", 00:23:59.681 "config": [ 00:23:59.681 { 00:23:59.681 "method": "iobuf_set_options", 00:23:59.681 "params": { 00:23:59.681 "small_pool_count": 8192, 00:23:59.681 "large_pool_count": 1024, 00:23:59.681 "small_bufsize": 8192, 00:23:59.681 "large_bufsize": 135168 00:23:59.681 } 00:23:59.681 } 00:23:59.681 ] 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "subsystem": "sock", 00:23:59.681 "config": [ 00:23:59.681 { 00:23:59.681 "method": "sock_set_default_impl", 00:23:59.681 "params": { 00:23:59.681 "impl_name": "posix" 00:23:59.681 } 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "method": "sock_impl_set_options", 00:23:59.681 "params": { 00:23:59.681 "impl_name": "ssl", 00:23:59.681 "recv_buf_size": 4096, 00:23:59.681 "send_buf_size": 4096, 00:23:59.681 "enable_recv_pipe": true, 00:23:59.681 "enable_quickack": false, 00:23:59.681 "enable_placement_id": 0, 00:23:59.681 "enable_zerocopy_send_server": true, 00:23:59.681 "enable_zerocopy_send_client": false, 00:23:59.681 "zerocopy_threshold": 0, 00:23:59.681 "tls_version": 0, 00:23:59.681 "enable_ktls": false 00:23:59.681 } 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "method": "sock_impl_set_options", 00:23:59.681 "params": { 00:23:59.681 "impl_name": "posix", 00:23:59.681 "recv_buf_size": 2097152, 00:23:59.681 "send_buf_size": 2097152, 00:23:59.681 "enable_recv_pipe": true, 00:23:59.681 "enable_quickack": false, 00:23:59.681 "enable_placement_id": 0, 00:23:59.681 "enable_zerocopy_send_server": true, 00:23:59.681 "enable_zerocopy_send_client": false, 00:23:59.681 "zerocopy_threshold": 0, 00:23:59.681 "tls_version": 0, 00:23:59.681 "enable_ktls": false 00:23:59.681 } 00:23:59.681 } 00:23:59.681 ] 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "subsystem": "vmd", 00:23:59.681 "config": [] 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "subsystem": "accel", 00:23:59.681 "config": [ 00:23:59.681 { 00:23:59.681 "method": "accel_set_options", 00:23:59.681 "params": { 00:23:59.681 "small_cache_size": 128, 00:23:59.681 "large_cache_size": 16, 00:23:59.681 "task_count": 2048, 00:23:59.681 "sequence_count": 2048, 00:23:59.681 "buf_count": 2048 00:23:59.681 } 00:23:59.681 } 00:23:59.681 ] 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "subsystem": "bdev", 00:23:59.681 "config": [ 00:23:59.681 { 00:23:59.681 "method": "bdev_set_options", 00:23:59.681 "params": { 00:23:59.681 "bdev_io_pool_size": 65535, 00:23:59.681 "bdev_io_cache_size": 256, 00:23:59.681 "bdev_auto_examine": true, 00:23:59.681 "iobuf_small_cache_size": 128, 00:23:59.681 "iobuf_large_cache_size": 16 00:23:59.681 } 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "method": "bdev_raid_set_options", 00:23:59.681 "params": { 00:23:59.681 "process_window_size_kb": 1024 00:23:59.681 } 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "method": "bdev_iscsi_set_options", 00:23:59.681 "params": { 00:23:59.681 "timeout_sec": 30 00:23:59.681 } 00:23:59.681 }, 00:23:59.681 { 00:23:59.681 "method": "bdev_nvme_set_options", 00:23:59.681 "params": { 00:23:59.681 "action_on_timeout": "none", 00:23:59.681 "timeout_us": 0, 00:23:59.681 "timeout_admin_us": 0, 00:23:59.681 "keep_alive_timeout_ms": 
10000, 00:23:59.681 "arbitration_burst": 0, 00:23:59.681 "low_priority_weight": 0, 00:23:59.681 "medium_priority_weight": 0, 00:23:59.681 "high_priority_weight": 0, 00:23:59.681 "nvme_adminq_poll_period_us": 10000, 00:23:59.681 "nvme_ioq_poll_period_us": 0, 00:23:59.681 "io_queue_requests": 512, 00:23:59.681 "delay_cmd_submit": true, 00:23:59.681 "transport_retry_count": 4, 00:23:59.681 "bdev_retry_count": 3, 00:23:59.681 "transport_ack_timeout": 0, 00:23:59.681 "ctrlr_loss_timeout_sec": 0, 00:23:59.681 "reconnect_delay_sec": 0, 00:23:59.681 "fast_io_fail_timeout_sec": 0, 00:23:59.681 "disable_auto_failback": false, 00:23:59.681 "generate_uuids": false, 00:23:59.681 "transport_tos": 0, 00:23:59.681 "nvme_error_stat": false, 00:23:59.681 "rdma_srq_size": 0, 00:23:59.681 "io_path_stat": false, 00:23:59.681 "allow_accel_sequence": false, 00:23:59.681 "rdma_max_cq_size": 0, 00:23:59.681 "rdma_cm_event_timeout_ms": 0, 00:23:59.681 "dhchap_digests": [ 00:23:59.681 "sha256", 00:23:59.681 "sha384", 00:23:59.681 "sha512" 00:23:59.681 ], 00:23:59.681 "dhchap_dhgroups": [ 00:23:59.681 "null", 00:23:59.681 "ffdhe2048", 00:23:59.681 "ffdhe3072", 00:23:59.681 "ffdhe4096", 00:23:59.681 "ffdhe6144", 00:23:59.681 "ffdhe8192" 00:23:59.681 ] 00:23:59.681 } 00:23:59.682 }, 00:23:59.682 { 00:23:59.682 "method": "bdev_nvme_attach_controller", 00:23:59.682 "params": { 00:23:59.682 "name": "nvme0", 00:23:59.682 "trtype": "TCP", 00:23:59.682 "adrfam": "IPv4", 00:23:59.682 "traddr": "10.0.0.2", 00:23:59.682 "trsvcid": "4420", 00:23:59.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.682 "prchk_reftag": false, 00:23:59.682 "prchk_guard": false, 00:23:59.682 "ctrlr_loss_timeout_sec": 0, 00:23:59.682 "reconnect_delay_sec": 0, 00:23:59.682 "fast_io_fail_timeout_sec": 0, 00:23:59.682 "psk": "key0", 00:23:59.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.682 "hdgst": false, 00:23:59.682 "ddgst": false 00:23:59.682 } 00:23:59.682 }, 00:23:59.682 { 00:23:59.682 "method": "bdev_nvme_set_hotplug", 00:23:59.682 "params": { 00:23:59.682 "period_us": 100000, 00:23:59.682 "enable": false 00:23:59.682 } 00:23:59.682 }, 00:23:59.682 { 00:23:59.682 "method": "bdev_enable_histogram", 00:23:59.682 "params": { 00:23:59.682 "name": "nvme0n1", 00:23:59.682 "enable": true 00:23:59.682 } 00:23:59.682 }, 00:23:59.682 { 00:23:59.682 "method": "bdev_wait_for_examine" 00:23:59.682 } 00:23:59.682 ] 00:23:59.682 }, 00:23:59.682 { 00:23:59.682 "subsystem": "nbd", 00:23:59.682 "config": [] 00:23:59.682 } 00:23:59.682 ] 00:23:59.682 }' 00:23:59.682 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:59.682 22:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.682 [2024-07-26 22:53:52.081686] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
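Two entries in that config carry the TLS state: keyring_file_add_key registers the PSK file under the name key0, and bdev_nvme_attach_controller then references the key by name ("psk": "key0") instead of by path. The same pair as live RPCs against the paused bdevperf, a sketch assuming this revision's flag spellings:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CMbDqMk05Q
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk key0                                                       # by keyring name, not file path

The keyring indirection is the non-deprecated route; the fips test further down still hands --psk a raw file path and collects deprecation warnings for it.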
00:23:59.682 [2024-07-26 22:53:52.081782] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586364 ] 00:23:59.682 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.682 [2024-07-26 22:53:52.143147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.940 [2024-07-26 22:53:52.236201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.940 [2024-07-26 22:53:52.409423] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.876 22:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:00.876 22:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:00.876 22:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.876 22:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:00.876 22:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.876 22:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.135 Running I/O for 1 seconds... 00:24:02.066 00:24:02.066 Latency(us) 00:24:02.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.066 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:02.066 Verification LBA range: start 0x0 length 0x2000 00:24:02.066 nvme0n1 : 1.07 1939.64 7.58 0.00 0.00 64365.80 11747.93 99032.18 00:24:02.066 =================================================================================================================== 00:24:02.066 Total : 1939.64 7.58 0.00 0.00 64365.80 11747.93 99032.18 00:24:02.066 0 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:02.066 nvmf_trace.0 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3586364 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3586364 ']' 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3586364 
00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3586364 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3586364' 00:24:02.066 killing process with pid 3586364 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3586364 00:24:02.066 Received shutdown signal, test time was about 1.000000 seconds 00:24:02.066 00:24:02.066 Latency(us) 00:24:02.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.066 =================================================================================================================== 00:24:02.066 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:02.066 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3586364 00:24:02.324 22:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:02.324 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.324 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:02.324 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.324 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:02.324 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.324 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.324 rmmod nvme_tcp 00:24:02.324 rmmod nvme_fabrics 00:24:02.582 rmmod nvme_keyring 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3586217 ']' 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3586217 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3586217 ']' 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3586217 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3586217 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3586217' 00:24:02.582 killing process with pid 3586217 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3586217 00:24:02.582 22:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3586217 00:24:02.841 22:53:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.841 22:53:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.841 22:53:55 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.841 22:53:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.841 22:53:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.841 22:53:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.841 22:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.841 22:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.743 22:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.743 22:53:57 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NPajcyZyKI /tmp/tmp.RmzaYDLqus /tmp/tmp.CMbDqMk05Q 00:24:04.743 00:24:04.743 real 1m19.293s 00:24:04.743 user 1m58.687s 00:24:04.743 sys 0m29.027s 00:24:04.743 22:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:04.743 22:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.743 ************************************ 00:24:04.743 END TEST nvmf_tls 00:24:04.743 ************************************ 00:24:04.743 22:53:57 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:04.743 22:53:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:04.743 22:53:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:04.743 22:53:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.743 ************************************ 00:24:04.743 START TEST nvmf_fips 00:24:04.743 ************************************ 00:24:04.743 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:05.003 * Looking for test storage... 
00:24:05.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.003 22:53:57 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:05.003 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:05.004 Error setting digest 00:24:05.004 00E2C860CB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:05.004 00E2C860CB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:05.004 22:53:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:06.938 
22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:06.938 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:06.938 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:06.938 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:06.938 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.938 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:24:07.197 00:24:07.197 --- 10.0.0.2 ping statistics --- 00:24:07.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.197 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:24:07.197 00:24:07.197 --- 10.0.0.1 ping statistics --- 00:24:07.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.197 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3588614 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3588614 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3588614 ']' 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:07.197 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:07.197 [2024-07-26 22:53:59.630513] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:07.197 [2024-07-26 22:53:59.630586] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.197 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.456 [2024-07-26 22:53:59.700959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.456 [2024-07-26 22:53:59.793645] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.456 [2024-07-26 22:53:59.793708] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
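The two pings above close out nvmftestinit for the fips run: both ports of the same physical NIC were claimed (cvl_0_0, cvl_0_1), one was moved into a private network namespace, and reachability was proven in both directions before any NVMe traffic flows. Condensed from the ip commands a few lines up:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # reverse-path check

Because nvmf_tgt runs under ip netns exec, its 10.0.0.2 listener is reachable only across the physical link, so the TLS handshake below is exercised on real wire traffic rather than loopback.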
00:24:07.456 [2024-07-26 22:53:59.793731] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.456 [2024-07-26 22:53:59.793744] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.456 [2024-07-26 22:53:59.793756] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.456 [2024-07-26 22:53:59.793787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:07.456 22:53:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:07.715 [2024-07-26 22:54:00.169299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.715 [2024-07-26 22:54:00.185264] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.715 [2024-07-26 22:54:00.185522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.715 [2024-07-26 22:54:00.217574] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:07.974 malloc0 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3588784 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3588784 /var/tmp/bdevperf.sock 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3588784 ']' 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:07.974 22:54:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:07.974 [2024-07-26 22:54:00.304891] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:07.974 [2024-07-26 22:54:00.304981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588784 ] 00:24:07.974 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.974 [2024-07-26 22:54:00.363348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.974 [2024-07-26 22:54:00.446757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.232 22:54:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:08.232 22:54:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:08.232 22:54:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:08.489 [2024-07-26 22:54:00.783974] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.490 [2024-07-26 22:54:00.784131] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:08.490 TLSTESTn1 00:24:08.490 22:54:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:08.490 Running I/O for 10 seconds... 
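The 10-second TLSTESTn1 run now in flight depends on a PSK the test staged for itself: NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: follows the NVMe/TCP PSK interchange format (the 01 field, if read per that spec, selects the SHA-256 variant), written to key.txt and chmod'd 0600. Unlike the tls test, the key reaches bdev_nvme_attach_controller as a bare path, which is why both sides printed deprecation warnings above (nvmf_tcp_psk_path on the target, spdk_nvme_ctrlr_opts.psk on the initiator, both slated for removal in v24.09). A condensed sketch of that staging, paths as in the script:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > test/nvmf/fips/key.txt
  chmod 0600 test/nvmf/fips/key.txt                            # tight permissions, matching the script
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt                             # path form: deprecated, see warnings above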
00:24:20.682 00:24:20.682 Latency(us) 00:24:20.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.682 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.682 Verification LBA range: start 0x0 length 0x2000 00:24:20.682 TLSTESTn1 : 10.04 1137.57 4.44 0.00 0.00 112195.76 11505.21 106411.05 00:24:20.682 =================================================================================================================== 00:24:20.682 Total : 1137.57 4.44 0.00 0.00 112195.76 11505.21 106411.05 00:24:20.682 0 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:20.682 nvmf_trace.0 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3588784 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3588784 ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3588784 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3588784 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3588784' 00:24:20.682 killing process with pid 3588784 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3588784 00:24:20.682 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.682 00:24:20.682 Latency(us) 00:24:20.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.682 =================================================================================================================== 00:24:20.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.682 [2024-07-26 22:54:11.161094] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3588784 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:20.682 rmmod nvme_tcp 00:24:20.682 rmmod nvme_fabrics 00:24:20.682 rmmod nvme_keyring 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3588614 ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3588614 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3588614 ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3588614 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3588614 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3588614' 00:24:20.682 killing process with pid 3588614 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3588614 00:24:20.682 [2024-07-26 22:54:11.485305] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3588614 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.682 22:54:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:21.617 00:24:21.617 real 0m16.556s 00:24:21.617 user 0m14.318s 00:24:21.617 sys 0m5.393s 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.617 ************************************ 00:24:21.617 END TEST nvmf_fips 
00:24:21.617 ************************************ 00:24:21.617 22:54:13 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:21.617 22:54:13 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:21.617 22:54:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:21.617 22:54:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:21.617 22:54:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:21.617 ************************************ 00:24:21.617 START TEST nvmf_fuzz 00:24:21.617 ************************************ 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:21.617 * Looking for test storage... 00:24:21.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.617 22:54:13 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:21.617 22:54:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:23.520 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:23.520 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:23.520 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:23.520 Found net devices under 0000:0a:00.1: cvl_0_1
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:23.520 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:23.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:23.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms
00:24:23.521
00:24:23.521 --- 10.0.0.2 ping statistics ---
00:24:23.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:23.521 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:23.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:23.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms
00:24:23.521
00:24:23.521 --- 10.0.0.1 ping statistics ---
00:24:23.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:23.521 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3592614
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3592614
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3592614 ']'
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:23.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
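[Annotation] The target bring-up the harness drives next can be reproduced by hand roughly as below. This is a minimal sketch, assuming the namespace and addresses configured above and the default RPC socket /var/tmp/spdk.sock (as the trace shows); the explicit scripts/rpc.py calls stand in for the harness's rpc_cmd wrapper, so treat them as an equivalent, not a verbatim copy of the run:

  # Sketch: recreate the fabrics-fuzz target by hand (flags taken from the trace)
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Start the target inside the test namespace: shm id 0, tracepoint mask 0xFFFF, core 0 only
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &

  # Once the RPC socket is up, wire a fuzzable subsystem to a RAM-backed bdev
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB IO unit size
  "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc0 64 512               # 64 MiB malloc bdev, 512 B blocks
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Fuzz it for 30 s on core 1 with a fixed seed, exactly as the run below does
  "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The second fuzz pass in this job repeats the same invocation with -j example.json, replaying a canned command set instead of the timed random run.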
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable
00:24:23.521 22:54:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:23.779 Malloc0
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.779 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
00:24:24.038 22:54:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
00:24:56.101 Fuzzing completed. Shutting down the fuzz application
00:24:56.101
00:24:56.101 Dumping successful admin opcodes:
00:24:56.101 8, 9, 10, 24,
00:24:56.101 Dumping successful io opcodes:
00:24:56.101 0, 9,
00:24:56.101 NS: 0x200003aeff00 I/O qp, Total commands completed: 464953, total successful commands: 2688, random_seed: 2955691648
00:24:56.101 NS: 0x200003aeff00 admin qp, Total commands completed: 57792, total successful commands: 462, random_seed: 1154370112
00:24:56.101 22:54:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:24:56.101 Fuzzing completed. Shutting down the fuzz application
00:24:56.101
00:24:56.101 Dumping successful admin opcodes:
00:24:56.101 24,
00:24:56.101 Dumping successful io opcodes:
00:24:56.101
00:24:56.101 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 428581080
00:24:56.101 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 428715328
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3592614 ']'
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 3592614
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3592614 ']'
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 3592614
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3592614
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:56.101
22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3592614' 00:24:56.101 killing process with pid 3592614 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 3592614 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 3592614 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.101 22:54:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.005 22:54:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:58.005 22:54:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:58.005 00:24:58.005 real 0m36.677s 00:24:58.005 user 0m50.863s 00:24:58.005 sys 0m15.333s 00:24:58.271 22:54:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:58.271 22:54:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.271 ************************************ 00:24:58.271 END TEST nvmf_fuzz 00:24:58.271 ************************************ 00:24:58.271 22:54:50 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:58.271 22:54:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:58.271 22:54:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:58.271 22:54:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.271 ************************************ 00:24:58.271 START TEST nvmf_multiconnection 00:24:58.271 ************************************ 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:58.271 * Looking for test storage... 
00:24:58.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:58.271 22:54:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.274 22:54:52 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:00.274 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:00.274 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:00.274 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:00.274 22:54:52 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:00.274 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:00.274 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:00.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:00.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms
00:25:00.274
00:25:00.274 --- 10.0.0.2 ping statistics ---
00:25:00.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:00.275 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:00.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:00.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms
00:25:00.275
00:25:00.275 --- 10.0.0.1 ping statistics ---
00:25:00.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:00.275 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3598215
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3598215
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 3598215 ']'
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:00.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
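[Annotation] The namespace plumbing behind the two ping checks above condenses to the commands below. Every command is lifted from the trace; the only inference is that the two E810 ports (cvl_0_0/cvl_0_1) are cabled back-to-back, which is what makes the cross-namespace pings succeed:

  # Sketch: isolate the target port in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side gets 10.0.0.2
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic on the initiator port
  ping -c 1 10.0.0.2                                                  # host namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host namespace

Running nvmf_tgt under ip netns exec (via NVMF_TARGET_NS_CMD) then gives the target an isolated stack while initiators connect from the host side, which is why this test can exercise real NIC hardware on a single machine.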
00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:00.275 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.275 [2024-07-26 22:54:52.650532] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:25:00.275 [2024-07-26 22:54:52.650626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.275 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.275 [2024-07-26 22:54:52.720320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:00.534 [2024-07-26 22:54:52.812784] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.534 [2024-07-26 22:54:52.812846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.534 [2024-07-26 22:54:52.812871] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.534 [2024-07-26 22:54:52.812885] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.534 [2024-07-26 22:54:52.812896] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.534 [2024-07-26 22:54:52.812979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.534 [2024-07-26 22:54:52.813047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.534 [2024-07-26 22:54:52.813146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.534 [2024-07-26 22:54:52.813149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.534 [2024-07-26 22:54:52.961854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.534 22:54:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.534 Malloc1 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.534 22:54:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.534 [2024-07-26 22:54:53.019150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.534 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 Malloc2 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 Malloc3 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 Malloc4 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.793 Malloc5 00:25:00.793 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 Malloc6 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 22:54:53 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 Malloc7 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.794 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 Malloc8 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 Malloc9 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 Malloc10 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 Malloc11 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.052 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
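[editor's note] That completes the target-side configuration for all eleven subsystems; from here the trace switches to the host side. Stripped of tracing, the connect loop it performs is the following sketch, with the hostnqn/hostid values copied verbatim from the log (waitforserial is reconstructed further down, after the connect phase):

    # Host side: one nvme-cli connect per subsystem, then wait for the
    # namespace to surface as a block device with the expected serial.
    for i in $(seq 1 11); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 \
            -n "nqn.2016-06.io.spdk:cnode$i" \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
            --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
        waitforserial "SPDK$i"
    done
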
00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.053 22:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:01.618 22:54:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:01.618 22:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:01.618 22:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.618 22:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:01.618 22:54:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:04.147 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:04.147 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:04.147 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:25:04.147 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:04.147 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.147 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:04.147 22:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.147 22:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:04.405 22:54:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:04.405 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:04.405 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.405 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:04.405 22:54:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:06.301 22:54:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:06.301 22:54:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:06.301 22:54:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:25:06.301 22:54:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:06.301 22:54:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.301 
22:54:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:06.301 22:54:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.301 22:54:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:07.232 22:54:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:07.232 22:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:07.232 22:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.232 22:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:07.232 22:54:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:09.129 22:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:09.129 22:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:09.129 22:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:25:09.129 22:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:09.129 22:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.129 22:55:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:09.129 22:55:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.129 22:55:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:09.695 22:55:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:09.695 22:55:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:09.695 22:55:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.695 22:55:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:09.695 22:55:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:12.222 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:12.222 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:12.222 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:25:12.222 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:12.222 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.222 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:12.222 22:55:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.222 22:55:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:12.480 22:55:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:12.480 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:12.480 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.480 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:12.480 22:55:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:15.008 22:55:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:15.008 22:55:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:15.008 22:55:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:25:15.008 22:55:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:15.008 22:55:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.008 22:55:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:15.008 22:55:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.008 22:55:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:15.266 22:55:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:15.266 22:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:15.266 22:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.266 22:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:15.266 22:55:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:17.162 22:55:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:17.162 22:55:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:17.162 22:55:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:25:17.162 22:55:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:17.162 22:55:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.162 22:55:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:17.162 22:55:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.162 22:55:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:18.095 22:55:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:18.095 22:55:10 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:18.095 22:55:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.095 22:55:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:18.095 22:55:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:19.990 22:55:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:19.990 22:55:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:19.990 22:55:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:25:19.990 22:55:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:19.990 22:55:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.990 22:55:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:19.990 22:55:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.990 22:55:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:20.921 22:55:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:20.921 22:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:20.921 22:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.921 22:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:20.921 22:55:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:22.816 22:55:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:22.816 22:55:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:22.816 22:55:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:22.816 22:55:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:22.816 22:55:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:22.816 22:55:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:22.816 22:55:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.816 22:55:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:23.748 22:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:23.748 22:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:23.748 22:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.748 22:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
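[editor's note] The waitforserial calls interleaved through this phase all follow the pattern visible in the traced commands: sleep, list block devices with their serials, count matches. A rough reconstruction from the trace alone, under the assumption that the unset `[[ -n '' ]]` test is an optional expected-device-count argument defaulting to 1 (details such as the final sleep placement may differ from the real autotest_common.sh helper):

    # Poll until a block device carrying the given serial appears,
    # giving up after 16 checks roughly 2 s apart.
    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n $2 ]] && nvme_device_counter=$2   # optional count (assumption)
        sleep 2                                  # initial settle, per trace timing
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }
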
00:25:23.748 22:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:25.649 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:25.649 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:25.649 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:25.649 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:25.649 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.649 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:25.649 22:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.649 22:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:26.586 22:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:26.586 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:26.586 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.586 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:26.586 22:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:28.567 22:55:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:28.567 22:55:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:28.567 22:55:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:28.567 22:55:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:28.567 22:55:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.567 22:55:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:28.567 22:55:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.567 22:55:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:29.497 22:55:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:29.497 22:55:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:29.497 22:55:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.497 22:55:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:29.497 22:55:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:31.392 22:55:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:31.392 22:55:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:31.392 22:55:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:31.392 22:55:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:31.392 22:55:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.392 22:55:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:31.392 22:55:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:31.392 [global] 00:25:31.392 thread=1 00:25:31.392 invalidate=1 00:25:31.392 rw=read 00:25:31.392 time_based=1 00:25:31.392 runtime=10 00:25:31.392 ioengine=libaio 00:25:31.392 direct=1 00:25:31.392 bs=262144 00:25:31.392 iodepth=64 00:25:31.392 norandommap=1 00:25:31.392 numjobs=1 00:25:31.392 00:25:31.392 [job0] 00:25:31.392 filename=/dev/nvme0n1 00:25:31.392 [job1] 00:25:31.392 filename=/dev/nvme10n1 00:25:31.392 [job2] 00:25:31.392 filename=/dev/nvme1n1 00:25:31.392 [job3] 00:25:31.392 filename=/dev/nvme2n1 00:25:31.392 [job4] 00:25:31.392 filename=/dev/nvme3n1 00:25:31.392 [job5] 00:25:31.392 filename=/dev/nvme4n1 00:25:31.392 [job6] 00:25:31.392 filename=/dev/nvme5n1 00:25:31.392 [job7] 00:25:31.392 filename=/dev/nvme6n1 00:25:31.392 [job8] 00:25:31.392 filename=/dev/nvme7n1 00:25:31.392 [job9] 00:25:31.392 filename=/dev/nvme8n1 00:25:31.392 [job10] 00:25:31.392 filename=/dev/nvme9n1 00:25:31.651 Could not set queue depth (nvme0n1) 00:25:31.651 Could not set queue depth (nvme10n1) 00:25:31.651 Could not set queue depth (nvme1n1) 00:25:31.651 Could not set queue depth (nvme2n1) 00:25:31.651 Could not set queue depth (nvme3n1) 00:25:31.651 Could not set queue depth (nvme4n1) 00:25:31.651 Could not set queue depth (nvme5n1) 00:25:31.651 Could not set queue depth (nvme6n1) 00:25:31.651 Could not set queue depth (nvme7n1) 00:25:31.651 Could not set queue depth (nvme8n1) 00:25:31.651 Could not set queue depth (nvme9n1) 00:25:31.909 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.909 fio-3.35 00:25:31.909 Starting 11 threads 00:25:44.114 00:25:44.114 job0: 
(groupid=0, jobs=1): err= 0: pid=3602478: Fri Jul 26 22:55:34 2024 00:25:44.114 read: IOPS=717, BW=179MiB/s (188MB/s)(1807MiB/10072msec) 00:25:44.114 slat (usec): min=10, max=93409, avg=1106.03, stdev=3809.81 00:25:44.114 clat (usec): min=1822, max=293242, avg=88024.26, stdev=39939.92 00:25:44.114 lat (usec): min=1844, max=293273, avg=89130.29, stdev=40426.77 00:25:44.114 clat percentiles (msec): 00:25:44.114 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 56], 00:25:44.114 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 91], 00:25:44.114 | 70.00th=[ 105], 80.00th=[ 123], 90.00th=[ 140], 95.00th=[ 157], 00:25:44.114 | 99.00th=[ 211], 99.50th=[ 222], 99.90th=[ 228], 99.95th=[ 228], 00:25:44.114 | 99.99th=[ 292] 00:25:44.114 bw ( KiB/s): min=80896, max=336384, per=10.16%, avg=183337.50, stdev=63072.18, samples=20 00:25:44.114 iops : min= 316, max= 1314, avg=716.10, stdev=246.31, samples=20 00:25:44.114 lat (msec) : 2=0.01%, 4=0.21%, 10=1.00%, 20=1.90%, 50=14.14% 00:25:44.114 lat (msec) : 100=50.35%, 250=32.38%, 500=0.01% 00:25:44.114 cpu : usr=0.49%, sys=2.27%, ctx=1780, majf=0, minf=4097 00:25:44.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:44.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.114 issued rwts: total=7227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.114 job1: (groupid=0, jobs=1): err= 0: pid=3602479: Fri Jul 26 22:55:34 2024 00:25:44.114 read: IOPS=636, BW=159MiB/s (167MB/s)(1601MiB/10070msec) 00:25:44.114 slat (usec): min=10, max=47292, avg=1168.25, stdev=3654.97 00:25:44.114 clat (msec): min=4, max=200, avg=99.38, stdev=37.72 00:25:44.114 lat (msec): min=4, max=200, avg=100.55, stdev=38.27 00:25:44.114 clat percentiles (msec): 00:25:44.114 | 1.00th=[ 15], 5.00th=[ 39], 10.00th=[ 49], 20.00th=[ 64], 00:25:44.114 | 30.00th=[ 75], 40.00th=[ 92], 50.00th=[ 103], 60.00th=[ 113], 00:25:44.114 | 70.00th=[ 126], 80.00th=[ 136], 90.00th=[ 148], 95.00th=[ 157], 00:25:44.114 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 194], 00:25:44.114 | 99.99th=[ 201] 00:25:44.114 bw ( KiB/s): min=109056, max=264704, per=8.99%, avg=162321.20, stdev=41645.34, samples=20 00:25:44.114 iops : min= 426, max= 1034, avg=634.00, stdev=162.64, samples=20 00:25:44.114 lat (msec) : 10=0.37%, 20=1.39%, 50=9.12%, 100=36.78%, 250=52.33% 00:25:44.114 cpu : usr=0.42%, sys=1.98%, ctx=1795, majf=0, minf=4097 00:25:44.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:44.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.114 issued rwts: total=6405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.114 job2: (groupid=0, jobs=1): err= 0: pid=3602480: Fri Jul 26 22:55:34 2024 00:25:44.114 read: IOPS=582, BW=146MiB/s (153MB/s)(1470MiB/10103msec) 00:25:44.114 slat (usec): min=10, max=95186, avg=1202.63, stdev=4277.58 00:25:44.114 clat (msec): min=4, max=308, avg=108.65, stdev=42.12 00:25:44.114 lat (msec): min=4, max=308, avg=109.86, stdev=42.70 00:25:44.114 clat percentiles (msec): 00:25:44.114 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 51], 20.00th=[ 72], 00:25:44.114 | 30.00th=[ 87], 40.00th=[ 97], 50.00th=[ 110], 60.00th=[ 121], 00:25:44.114 | 
70.00th=[ 132], 80.00th=[ 146], 90.00th=[ 159], 95.00th=[ 182], 00:25:44.114 | 99.00th=[ 215], 99.50th=[ 218], 99.90th=[ 228], 99.95th=[ 236], 00:25:44.114 | 99.99th=[ 309] 00:25:44.114 bw ( KiB/s): min=67072, max=281037, per=8.25%, avg=148883.85, stdev=46824.65, samples=20 00:25:44.114 iops : min= 262, max= 1097, avg=581.50, stdev=182.78, samples=20 00:25:44.114 lat (msec) : 10=0.26%, 20=1.02%, 50=8.54%, 100=32.63%, 250=57.54% 00:25:44.114 lat (msec) : 500=0.02% 00:25:44.114 cpu : usr=0.29%, sys=1.81%, ctx=1510, majf=0, minf=3721 00:25:44.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:44.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.114 issued rwts: total=5881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.114 job3: (groupid=0, jobs=1): err= 0: pid=3602481: Fri Jul 26 22:55:34 2024 00:25:44.114 read: IOPS=600, BW=150MiB/s (157MB/s)(1512MiB/10070msec) 00:25:44.114 slat (usec): min=13, max=98082, avg=1375.09, stdev=4343.00 00:25:44.114 clat (usec): min=972, max=216844, avg=105104.20, stdev=37934.28 00:25:44.114 lat (usec): min=997, max=216884, avg=106479.29, stdev=38430.23 00:25:44.114 clat percentiles (msec): 00:25:44.114 | 1.00th=[ 9], 5.00th=[ 40], 10.00th=[ 59], 20.00th=[ 74], 00:25:44.114 | 30.00th=[ 83], 40.00th=[ 93], 50.00th=[ 108], 60.00th=[ 121], 00:25:44.114 | 70.00th=[ 129], 80.00th=[ 140], 90.00th=[ 150], 95.00th=[ 161], 00:25:44.114 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 209], 99.95th=[ 209], 00:25:44.114 | 99.99th=[ 218] 00:25:44.114 bw ( KiB/s): min=120832, max=228864, per=8.49%, avg=153169.85, stdev=35371.02, samples=20 00:25:44.114 iops : min= 472, max= 894, avg=598.25, stdev=138.16, samples=20 00:25:44.114 lat (usec) : 1000=0.02% 00:25:44.114 lat (msec) : 2=0.02%, 4=0.08%, 10=1.19%, 20=0.96%, 50=5.04% 00:25:44.114 lat (msec) : 100=38.76%, 250=53.94% 00:25:44.114 cpu : usr=0.35%, sys=2.05%, ctx=1468, majf=0, minf=4097 00:25:44.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:44.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.115 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.115 job4: (groupid=0, jobs=1): err= 0: pid=3602482: Fri Jul 26 22:55:34 2024 00:25:44.115 read: IOPS=782, BW=196MiB/s (205MB/s)(1978MiB/10107msec) 00:25:44.115 slat (usec): min=9, max=83935, avg=981.37, stdev=3583.20 00:25:44.115 clat (usec): min=1403, max=319703, avg=80714.28, stdev=49335.63 00:25:44.115 lat (usec): min=1972, max=330910, avg=81695.64, stdev=49852.37 00:25:44.115 clat percentiles (msec): 00:25:44.115 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 34], 20.00th=[ 37], 00:25:44.115 | 30.00th=[ 43], 40.00th=[ 61], 50.00th=[ 73], 60.00th=[ 89], 00:25:44.115 | 70.00th=[ 101], 80.00th=[ 116], 90.00th=[ 142], 95.00th=[ 165], 00:25:44.115 | 99.00th=[ 247], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 317], 00:25:44.115 | 99.99th=[ 321] 00:25:44.115 bw ( KiB/s): min=71168, max=407760, per=11.13%, avg=200823.65, stdev=86203.97, samples=20 00:25:44.115 iops : min= 278, max= 1592, avg=784.40, stdev=336.63, samples=20 00:25:44.115 lat (msec) : 2=0.04%, 4=0.37%, 10=1.93%, 20=2.83%, 50=29.70% 00:25:44.115 lat (msec) : 100=34.59%, 
250=29.60%, 500=0.94% 00:25:44.115 cpu : usr=0.52%, sys=2.36%, ctx=1940, majf=0, minf=4097 00:25:44.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:44.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.115 issued rwts: total=7912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.115 job5: (groupid=0, jobs=1): err= 0: pid=3602483: Fri Jul 26 22:55:34 2024 00:25:44.115 read: IOPS=600, BW=150MiB/s (157MB/s)(1516MiB/10101msec) 00:25:44.115 slat (usec): min=9, max=68581, avg=1323.58, stdev=3993.39 00:25:44.115 clat (msec): min=3, max=201, avg=105.24, stdev=40.06 00:25:44.115 lat (msec): min=3, max=201, avg=106.56, stdev=40.56 00:25:44.115 clat percentiles (msec): 00:25:44.115 | 1.00th=[ 7], 5.00th=[ 39], 10.00th=[ 50], 20.00th=[ 65], 00:25:44.115 | 30.00th=[ 86], 40.00th=[ 99], 50.00th=[ 114], 60.00th=[ 125], 00:25:44.115 | 70.00th=[ 133], 80.00th=[ 142], 90.00th=[ 153], 95.00th=[ 159], 00:25:44.115 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 199], 99.95th=[ 199], 00:25:44.115 | 99.99th=[ 203] 00:25:44.115 bw ( KiB/s): min=107008, max=280526, per=8.51%, avg=153526.65, stdev=49876.04, samples=20 00:25:44.115 iops : min= 418, max= 1095, avg=599.65, stdev=194.70, samples=20 00:25:44.115 lat (msec) : 4=0.45%, 10=1.70%, 20=0.91%, 50=7.95%, 100=29.97% 00:25:44.115 lat (msec) : 250=59.02% 00:25:44.115 cpu : usr=0.45%, sys=1.87%, ctx=1543, majf=0, minf=4097 00:25:44.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:44.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.115 issued rwts: total=6062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.115 job6: (groupid=0, jobs=1): err= 0: pid=3602492: Fri Jul 26 22:55:34 2024 00:25:44.115 read: IOPS=538, BW=135MiB/s (141MB/s)(1360MiB/10097msec) 00:25:44.115 slat (usec): min=13, max=42861, avg=1682.34, stdev=4441.63 00:25:44.115 clat (msec): min=32, max=209, avg=117.02, stdev=29.73 00:25:44.115 lat (msec): min=32, max=215, avg=118.71, stdev=30.14 00:25:44.115 clat percentiles (msec): 00:25:44.115 | 1.00th=[ 53], 5.00th=[ 70], 10.00th=[ 78], 20.00th=[ 88], 00:25:44.115 | 30.00th=[ 99], 40.00th=[ 111], 50.00th=[ 121], 60.00th=[ 128], 00:25:44.115 | 70.00th=[ 136], 80.00th=[ 144], 90.00th=[ 155], 95.00th=[ 161], 00:25:44.115 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 211], 99.95th=[ 211], 00:25:44.115 | 99.99th=[ 211] 00:25:44.115 bw ( KiB/s): min=105984, max=184320, per=7.62%, avg=137621.00, stdev=27034.91, samples=20 00:25:44.115 iops : min= 414, max= 720, avg=537.50, stdev=105.49, samples=20 00:25:44.115 lat (msec) : 50=0.59%, 100=30.75%, 250=68.66% 00:25:44.115 cpu : usr=0.36%, sys=1.77%, ctx=1274, majf=0, minf=4097 00:25:44.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:44.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.115 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.115 job7: (groupid=0, jobs=1): err= 0: pid=3602493: Fri Jul 26 22:55:34 2024 00:25:44.115 read: 
IOPS=560, BW=140MiB/s (147MB/s)(1416MiB/10096msec) 00:25:44.115 slat (usec): min=9, max=60604, avg=1422.49, stdev=4157.42 00:25:44.115 clat (msec): min=3, max=304, avg=112.60, stdev=40.91 00:25:44.115 lat (msec): min=3, max=326, avg=114.02, stdev=41.43 00:25:44.115 clat percentiles (msec): 00:25:44.115 | 1.00th=[ 27], 5.00th=[ 47], 10.00th=[ 67], 20.00th=[ 83], 00:25:44.115 | 30.00th=[ 93], 40.00th=[ 102], 50.00th=[ 110], 60.00th=[ 118], 00:25:44.115 | 70.00th=[ 129], 80.00th=[ 142], 90.00th=[ 161], 95.00th=[ 186], 00:25:44.115 | 99.00th=[ 239], 99.50th=[ 284], 99.90th=[ 300], 99.95th=[ 300], 00:25:44.115 | 99.99th=[ 305] 00:25:44.115 bw ( KiB/s): min=71168, max=243712, per=7.94%, avg=143322.00, stdev=41763.50, samples=20 00:25:44.115 iops : min= 278, max= 952, avg=559.75, stdev=163.16, samples=20 00:25:44.115 lat (msec) : 4=0.04%, 10=0.35%, 20=0.44%, 50=4.70%, 100=32.83% 00:25:44.115 lat (msec) : 250=60.82%, 500=0.83% 00:25:44.115 cpu : usr=0.41%, sys=1.72%, ctx=1423, majf=0, minf=4097 00:25:44.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:44.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.115 issued rwts: total=5663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.115 job8: (groupid=0, jobs=1): err= 0: pid=3602494: Fri Jul 26 22:55:34 2024 00:25:44.115 read: IOPS=703, BW=176MiB/s (184MB/s)(1763MiB/10017msec) 00:25:44.115 slat (usec): min=12, max=82813, avg=1282.37, stdev=3932.95 00:25:44.115 clat (usec): min=1764, max=207917, avg=89604.24, stdev=38406.27 00:25:44.115 lat (usec): min=1782, max=207936, avg=90886.61, stdev=38971.43 00:25:44.115 clat percentiles (msec): 00:25:44.115 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 51], 00:25:44.115 | 30.00th=[ 66], 40.00th=[ 79], 50.00th=[ 90], 60.00th=[ 99], 00:25:44.115 | 70.00th=[ 110], 80.00th=[ 125], 90.00th=[ 140], 95.00th=[ 157], 00:25:44.115 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 199], 99.95th=[ 199], 00:25:44.115 | 99.99th=[ 209] 00:25:44.115 bw ( KiB/s): min=104448, max=301056, per=9.91%, avg=178842.75, stdev=59419.91, samples=20 00:25:44.115 iops : min= 408, max= 1176, avg=698.60, stdev=232.11, samples=20 00:25:44.115 lat (msec) : 2=0.07%, 4=0.10%, 10=0.55%, 20=1.15%, 50=17.93% 00:25:44.115 lat (msec) : 100=41.77%, 250=38.43% 00:25:44.115 cpu : usr=0.46%, sys=2.16%, ctx=1548, majf=0, minf=4097 00:25:44.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:44.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.115 issued rwts: total=7050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.115 job9: (groupid=0, jobs=1): err= 0: pid=3602497: Fri Jul 26 22:55:34 2024 00:25:44.115 read: IOPS=635, BW=159MiB/s (167MB/s)(1599MiB/10058msec) 00:25:44.115 slat (usec): min=14, max=110032, avg=1420.09, stdev=4348.37 00:25:44.115 clat (msec): min=3, max=256, avg=99.17, stdev=37.89 00:25:44.115 lat (msec): min=3, max=301, avg=100.59, stdev=38.59 00:25:44.115 clat percentiles (msec): 00:25:44.115 | 1.00th=[ 17], 5.00th=[ 40], 10.00th=[ 53], 20.00th=[ 70], 00:25:44.115 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 97], 60.00th=[ 106], 00:25:44.115 | 70.00th=[ 114], 80.00th=[ 128], 90.00th=[ 148], 95.00th=[ 
167], 00:25:44.115 | 99.00th=[ 205], 99.50th=[ 220], 99.90th=[ 230], 99.95th=[ 230], 00:25:44.115 | 99.99th=[ 257] 00:25:44.115 bw ( KiB/s): min=72704, max=337920, per=8.98%, avg=162069.40, stdev=58714.33, samples=20 00:25:44.115 iops : min= 284, max= 1320, avg=632.95, stdev=229.44, samples=20 00:25:44.115 lat (msec) : 4=0.05%, 10=0.33%, 20=1.20%, 50=7.44%, 100=44.83% 00:25:44.115 lat (msec) : 250=46.10%, 500=0.05% 00:25:44.115 cpu : usr=0.36%, sys=2.14%, ctx=1533, majf=0, minf=4097 00:25:44.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:44.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.115 issued rwts: total=6395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.115 job10: (groupid=0, jobs=1): err= 0: pid=3602498: Fri Jul 26 22:55:34 2024 00:25:44.115 read: IOPS=715, BW=179MiB/s (188MB/s)(1795MiB/10036msec) 00:25:44.115 slat (usec): min=10, max=214485, avg=1229.77, stdev=4663.57 00:25:44.115 clat (usec): min=1051, max=317056, avg=88191.52, stdev=44937.10 00:25:44.115 lat (usec): min=1093, max=349649, avg=89421.29, stdev=45510.93 00:25:44.115 clat percentiles (msec): 00:25:44.115 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 36], 20.00th=[ 57], 00:25:44.115 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 93], 00:25:44.115 | 70.00th=[ 102], 80.00th=[ 112], 90.00th=[ 136], 95.00th=[ 169], 00:25:44.115 | 99.00th=[ 255], 99.50th=[ 313], 99.90th=[ 317], 99.95th=[ 317], 00:25:44.115 | 99.99th=[ 317] 00:25:44.115 bw ( KiB/s): min=46080, max=337920, per=10.09%, avg=182140.40, stdev=55133.66, samples=20 00:25:44.115 iops : min= 180, max= 1320, avg=711.40, stdev=215.43, samples=20 00:25:44.115 lat (msec) : 2=0.15%, 4=0.25%, 10=1.89%, 20=2.31%, 50=11.52% 00:25:44.115 lat (msec) : 100=52.29%, 250=30.55%, 500=1.03% 00:25:44.115 cpu : usr=0.37%, sys=2.28%, ctx=1636, majf=0, minf=4097 00:25:44.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:44.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:44.115 issued rwts: total=7179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:44.115 00:25:44.115 Run status group 0 (all jobs): 00:25:44.115 READ: bw=1763MiB/s (1848MB/s), 135MiB/s-196MiB/s (141MB/s-205MB/s), io=17.4GiB (18.7GB), run=10017-10107msec 00:25:44.116 00:25:44.116 Disk stats (read/write): 00:25:44.116 nvme0n1: ios=14248/0, merge=0/0, ticks=1236987/0, in_queue=1236987, util=97.18% 00:25:44.116 nvme10n1: ios=12560/0, merge=0/0, ticks=1237820/0, in_queue=1237820, util=97.39% 00:25:44.116 nvme1n1: ios=11533/0, merge=0/0, ticks=1234864/0, in_queue=1234864, util=97.66% 00:25:44.116 nvme2n1: ios=11908/0, merge=0/0, ticks=1234750/0, in_queue=1234750, util=97.80% 00:25:44.116 nvme3n1: ios=15530/0, merge=0/0, ticks=1233990/0, in_queue=1233990, util=97.87% 00:25:44.116 nvme4n1: ios=11908/0, merge=0/0, ticks=1231165/0, in_queue=1231165, util=98.18% 00:25:44.116 nvme5n1: ios=10686/0, merge=0/0, ticks=1228648/0, in_queue=1228648, util=98.35% 00:25:44.116 nvme6n1: ios=11140/0, merge=0/0, ticks=1231829/0, in_queue=1231829, util=98.48% 00:25:44.116 nvme7n1: ios=13873/0, merge=0/0, ticks=1237218/0, in_queue=1237218, util=98.88% 00:25:44.116 nvme8n1: ios=12546/0, merge=0/0, 
ticks=1235791/0, in_queue=1235791, util=99.07% 00:25:44.116 nvme9n1: ios=14151/0, merge=0/0, ticks=1232463/0, in_queue=1232463, util=99.20% 00:25:44.116 22:55:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:44.116 [global] 00:25:44.116 thread=1 00:25:44.116 invalidate=1 00:25:44.116 rw=randwrite 00:25:44.116 time_based=1 00:25:44.116 runtime=10 00:25:44.116 ioengine=libaio 00:25:44.116 direct=1 00:25:44.116 bs=262144 00:25:44.116 iodepth=64 00:25:44.116 norandommap=1 00:25:44.116 numjobs=1 00:25:44.116 00:25:44.116 [job0] 00:25:44.116 filename=/dev/nvme0n1 00:25:44.116 [job1] 00:25:44.116 filename=/dev/nvme10n1 00:25:44.116 [job2] 00:25:44.116 filename=/dev/nvme1n1 00:25:44.116 [job3] 00:25:44.116 filename=/dev/nvme2n1 00:25:44.116 [job4] 00:25:44.116 filename=/dev/nvme3n1 00:25:44.116 [job5] 00:25:44.116 filename=/dev/nvme4n1 00:25:44.116 [job6] 00:25:44.116 filename=/dev/nvme5n1 00:25:44.116 [job7] 00:25:44.116 filename=/dev/nvme6n1 00:25:44.116 [job8] 00:25:44.116 filename=/dev/nvme7n1 00:25:44.116 [job9] 00:25:44.116 filename=/dev/nvme8n1 00:25:44.116 [job10] 00:25:44.116 filename=/dev/nvme9n1 00:25:44.116 Could not set queue depth (nvme0n1) 00:25:44.116 Could not set queue depth (nvme10n1) 00:25:44.116 Could not set queue depth (nvme1n1) 00:25:44.116 Could not set queue depth (nvme2n1) 00:25:44.116 Could not set queue depth (nvme3n1) 00:25:44.116 Could not set queue depth (nvme4n1) 00:25:44.116 Could not set queue depth (nvme5n1) 00:25:44.116 Could not set queue depth (nvme6n1) 00:25:44.116 Could not set queue depth (nvme7n1) 00:25:44.116 Could not set queue depth (nvme8n1) 00:25:44.116 Could not set queue depth (nvme9n1) 00:25:44.116 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.116 fio-3.35 00:25:44.116 Starting 11 threads 00:25:54.098 00:25:54.098 job0: (groupid=0, jobs=1): err= 0: pid=3603513: Fri Jul 26 22:55:45 2024 00:25:54.098 write: IOPS=424, BW=106MiB/s (111MB/s)(1069MiB/10075msec); 0 zone resets 00:25:54.098 slat (usec): min=24, max=100435, avg=2136.13, stdev=5516.19 00:25:54.098 clat (msec): min=2, 
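[editor's note] For reference, each fio-wrapper pass boils down to eleven identical jobs, one per connected namespace. A standalone equivalent of one randwrite job, with every parameter copied from the [global] section the wrapper just dumped (job0 shown; the wrapper adds job1..job10 for the remaining /dev/nvme*n1 devices, and the earlier read pass differs only in rw=read):

    # One job of the randwrite pass, as a plain fio command line.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --thread=1 --invalidate=1 --rw=randwrite --time_based=1 --runtime=10 \
        --ioengine=libaio --direct=1 --bs=262144 --iodepth=64 \
        --norandommap=1 --numjobs=1

The read-pass numbers above are internally consistent: job0's 7227 completed 256 KiB IOs come to the reported 1807 MiB over ~10.07 s (179 MiB/s), and the run-status aggregate of 17.4 GiB over ~10.1 s gives the 1763 MiB/s shown.
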
max=416, avg=148.57, stdev=74.32 00:25:54.098 lat (msec): min=6, max=416, avg=150.71, stdev=75.33 00:25:54.098 clat percentiles (msec): 00:25:54.098 | 1.00th=[ 14], 5.00th=[ 45], 10.00th=[ 62], 20.00th=[ 85], 00:25:54.098 | 30.00th=[ 95], 40.00th=[ 121], 50.00th=[ 142], 60.00th=[ 161], 00:25:54.098 | 70.00th=[ 190], 80.00th=[ 213], 90.00th=[ 236], 95.00th=[ 279], 00:25:54.098 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 414], 99.95th=[ 418], 00:25:54.098 | 99.99th=[ 418] 00:25:54.098 bw ( KiB/s): min=51200, max=223232, per=8.08%, avg=107827.20, stdev=45934.97, samples=20 00:25:54.098 iops : min= 200, max= 872, avg=421.20, stdev=179.43, samples=20 00:25:54.098 lat (msec) : 4=0.05%, 10=0.51%, 20=1.05%, 50=4.37%, 100=26.36% 00:25:54.098 lat (msec) : 250=61.08%, 500=6.57% 00:25:54.098 cpu : usr=1.36%, sys=1.42%, ctx=1386, majf=0, minf=1 00:25:54.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:54.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.098 issued rwts: total=0,4275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.098 job1: (groupid=0, jobs=1): err= 0: pid=3603525: Fri Jul 26 22:55:45 2024 00:25:54.098 write: IOPS=406, BW=102MiB/s (107MB/s)(1031MiB/10137msec); 0 zone resets 00:25:54.098 slat (usec): min=17, max=95969, avg=1794.81, stdev=5392.83 00:25:54.098 clat (msec): min=2, max=470, avg=155.36, stdev=72.02 00:25:54.098 lat (msec): min=2, max=470, avg=157.16, stdev=72.85 00:25:54.098 clat percentiles (msec): 00:25:54.098 | 1.00th=[ 13], 5.00th=[ 46], 10.00th=[ 74], 20.00th=[ 111], 00:25:54.098 | 30.00th=[ 118], 40.00th=[ 130], 50.00th=[ 142], 60.00th=[ 155], 00:25:54.098 | 70.00th=[ 178], 80.00th=[ 207], 90.00th=[ 257], 95.00th=[ 300], 00:25:54.098 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 468], 99.95th=[ 472], 00:25:54.098 | 99.99th=[ 472] 00:25:54.098 bw ( KiB/s): min=40960, max=132608, per=7.79%, avg=103996.70, stdev=28372.87, samples=20 00:25:54.098 iops : min= 160, max= 518, avg=406.20, stdev=110.84, samples=20 00:25:54.098 lat (msec) : 4=0.07%, 10=0.65%, 20=1.43%, 50=3.93%, 100=8.36% 00:25:54.098 lat (msec) : 250=74.52%, 500=11.03% 00:25:54.098 cpu : usr=1.31%, sys=1.38%, ctx=2128, majf=0, minf=1 00:25:54.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:54.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.098 issued rwts: total=0,4125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.098 job2: (groupid=0, jobs=1): err= 0: pid=3603526: Fri Jul 26 22:55:45 2024 00:25:54.098 write: IOPS=452, BW=113MiB/s (119MB/s)(1146MiB/10139msec); 0 zone resets 00:25:54.098 slat (usec): min=23, max=74147, avg=1741.26, stdev=5048.27 00:25:54.098 clat (msec): min=2, max=426, avg=139.69, stdev=83.60 00:25:54.098 lat (msec): min=2, max=426, avg=141.43, stdev=84.74 00:25:54.098 clat percentiles (msec): 00:25:54.098 | 1.00th=[ 12], 5.00th=[ 27], 10.00th=[ 41], 20.00th=[ 57], 00:25:54.098 | 30.00th=[ 81], 40.00th=[ 107], 50.00th=[ 124], 60.00th=[ 155], 00:25:54.098 | 70.00th=[ 194], 80.00th=[ 224], 90.00th=[ 249], 95.00th=[ 268], 00:25:54.098 | 99.00th=[ 384], 99.50th=[ 405], 99.90th=[ 418], 99.95th=[ 426], 00:25:54.098 | 99.99th=[ 426] 00:25:54.098 bw ( KiB/s): 
min=61440, max=251392, per=8.67%, avg=115737.60, stdev=50447.62, samples=20 00:25:54.098 iops : min= 240, max= 982, avg=452.10, stdev=197.06, samples=20 00:25:54.098 lat (msec) : 4=0.04%, 10=0.65%, 20=2.49%, 50=12.85%, 100=22.64% 00:25:54.098 lat (msec) : 250=51.77%, 500=9.55% 00:25:54.098 cpu : usr=1.34%, sys=1.57%, ctx=2334, majf=0, minf=1 00:25:54.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:54.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.098 issued rwts: total=0,4584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.098 job3: (groupid=0, jobs=1): err= 0: pid=3603527: Fri Jul 26 22:55:45 2024 00:25:54.098 write: IOPS=486, BW=122MiB/s (128MB/s)(1230MiB/10112msec); 0 zone resets 00:25:54.098 slat (usec): min=17, max=114919, avg=1424.57, stdev=4346.98 00:25:54.098 clat (msec): min=2, max=390, avg=130.02, stdev=73.08 00:25:54.098 lat (msec): min=2, max=390, avg=131.44, stdev=73.97 00:25:54.098 clat percentiles (msec): 00:25:54.098 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 51], 20.00th=[ 64], 00:25:54.098 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 121], 60.00th=[ 144], 00:25:54.098 | 70.00th=[ 169], 80.00th=[ 197], 90.00th=[ 226], 95.00th=[ 257], 00:25:54.098 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 380], 99.95th=[ 388], 00:25:54.098 | 99.99th=[ 393] 00:25:54.098 bw ( KiB/s): min=52224, max=244736, per=9.32%, avg=124383.60, stdev=45872.42, samples=20 00:25:54.098 iops : min= 204, max= 956, avg=485.85, stdev=179.15, samples=20 00:25:54.098 lat (msec) : 4=0.14%, 10=0.83%, 20=2.30%, 50=6.54%, 100=33.94% 00:25:54.098 lat (msec) : 250=50.31%, 500=5.93% 00:25:54.098 cpu : usr=1.43%, sys=1.82%, ctx=2752, majf=0, minf=1 00:25:54.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:54.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.098 issued rwts: total=0,4921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.098 job4: (groupid=0, jobs=1): err= 0: pid=3603528: Fri Jul 26 22:55:45 2024 00:25:54.098 write: IOPS=466, BW=117MiB/s (122MB/s)(1174MiB/10068msec); 0 zone resets 00:25:54.098 slat (usec): min=23, max=150084, avg=1594.48, stdev=4889.74 00:25:54.098 clat (msec): min=3, max=402, avg=135.62, stdev=70.19 00:25:54.098 lat (msec): min=3, max=402, avg=137.21, stdev=71.07 00:25:54.098 clat percentiles (msec): 00:25:54.098 | 1.00th=[ 13], 5.00th=[ 27], 10.00th=[ 47], 20.00th=[ 87], 00:25:54.098 | 30.00th=[ 106], 40.00th=[ 114], 50.00th=[ 123], 60.00th=[ 134], 00:25:54.098 | 70.00th=[ 155], 80.00th=[ 186], 90.00th=[ 249], 95.00th=[ 271], 00:25:54.098 | 99.00th=[ 321], 99.50th=[ 338], 99.90th=[ 393], 99.95th=[ 397], 00:25:54.098 | 99.99th=[ 401] 00:25:54.098 bw ( KiB/s): min=55808, max=190976, per=8.89%, avg=118553.60, stdev=39512.78, samples=20 00:25:54.098 iops : min= 218, max= 746, avg=463.10, stdev=154.35, samples=20 00:25:54.098 lat (msec) : 4=0.02%, 10=0.58%, 20=2.49%, 50=7.58%, 100=15.81% 00:25:54.098 lat (msec) : 250=64.29%, 500=9.22% 00:25:54.098 cpu : usr=1.40%, sys=1.36%, ctx=2493, majf=0, minf=1 00:25:54.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:54.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:25:54.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.098 issued rwts: total=0,4694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.098 job5: (groupid=0, jobs=1): err= 0: pid=3603533: Fri Jul 26 22:55:45 2024 00:25:54.098 write: IOPS=438, BW=110MiB/s (115MB/s)(1107MiB/10095msec); 0 zone resets 00:25:54.098 slat (usec): min=17, max=78012, avg=1618.23, stdev=4294.83 00:25:54.098 clat (usec): min=1774, max=352772, avg=144250.04, stdev=66943.53 00:25:54.098 lat (usec): min=1822, max=352856, avg=145868.27, stdev=67774.84 00:25:54.098 clat percentiles (msec): 00:25:54.098 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 62], 20.00th=[ 101], 00:25:54.098 | 30.00th=[ 113], 40.00th=[ 122], 50.00th=[ 136], 60.00th=[ 153], 00:25:54.098 | 70.00th=[ 171], 80.00th=[ 197], 90.00th=[ 226], 95.00th=[ 279], 00:25:54.099 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 355], 00:25:54.099 | 99.99th=[ 355] 00:25:54.099 bw ( KiB/s): min=51200, max=158208, per=8.37%, avg=111718.40, stdev=30387.01, samples=20 00:25:54.099 iops : min= 200, max= 618, avg=436.40, stdev=118.70, samples=20 00:25:54.099 lat (msec) : 2=0.02%, 4=0.18%, 10=1.20%, 20=0.95%, 50=5.71% 00:25:54.099 lat (msec) : 100=12.04%, 250=72.78%, 500=7.12% 00:25:54.099 cpu : usr=1.31%, sys=1.53%, ctx=2300, majf=0, minf=1 00:25:54.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:54.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.099 issued rwts: total=0,4427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.099 job6: (groupid=0, jobs=1): err= 0: pid=3603545: Fri Jul 26 22:55:45 2024 00:25:54.099 write: IOPS=527, BW=132MiB/s (138MB/s)(1333MiB/10112msec); 0 zone resets 00:25:54.099 slat (usec): min=20, max=231654, avg=1195.37, stdev=5136.78 00:25:54.099 clat (usec): min=1849, max=612452, avg=120089.66, stdev=89120.25 00:25:54.099 lat (usec): min=1875, max=612536, avg=121285.03, stdev=89688.81 00:25:54.099 clat percentiles (msec): 00:25:54.099 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 25], 20.00th=[ 54], 00:25:54.099 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 122], 00:25:54.099 | 70.00th=[ 144], 80.00th=[ 165], 90.00th=[ 245], 95.00th=[ 292], 00:25:54.099 | 99.00th=[ 397], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:25:54.099 | 99.99th=[ 617] 00:25:54.099 bw ( KiB/s): min=53248, max=201216, per=10.11%, avg=134886.40, stdev=45634.25, samples=20 00:25:54.099 iops : min= 208, max= 786, avg=526.90, stdev=178.26, samples=20 00:25:54.099 lat (msec) : 2=0.06%, 4=0.47%, 10=2.61%, 20=4.89%, 50=10.93% 00:25:54.099 lat (msec) : 100=35.28%, 250=36.31%, 500=8.72%, 750=0.73% 00:25:54.099 cpu : usr=1.49%, sys=1.76%, ctx=3114, majf=0, minf=1 00:25:54.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:54.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.099 issued rwts: total=0,5332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.099 job7: (groupid=0, jobs=1): err= 0: pid=3603553: Fri Jul 26 22:55:45 2024 00:25:54.099 write: IOPS=600, BW=150MiB/s (157MB/s)(1516MiB/10104msec); 0 zone resets 00:25:54.099 
slat (usec): min=19, max=47843, avg=1124.90, stdev=2828.12 00:25:54.099 clat (msec): min=3, max=349, avg=105.45, stdev=54.10 00:25:54.099 lat (msec): min=4, max=354, avg=106.57, stdev=54.60 00:25:54.099 clat percentiles (msec): 00:25:54.099 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 44], 20.00th=[ 78], 00:25:54.099 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 102], 00:25:54.099 | 70.00th=[ 122], 80.00th=[ 140], 90.00th=[ 178], 95.00th=[ 215], 00:25:54.099 | 99.00th=[ 275], 99.50th=[ 321], 99.90th=[ 347], 99.95th=[ 347], 00:25:54.099 | 99.99th=[ 351] 00:25:54.099 bw ( KiB/s): min=79360, max=192512, per=11.52%, avg=153664.30, stdev=33634.22, samples=20 00:25:54.099 iops : min= 310, max= 752, avg=600.25, stdev=131.38, samples=20 00:25:54.099 lat (msec) : 4=0.02%, 10=0.74%, 20=1.80%, 50=9.53%, 100=46.91% 00:25:54.099 lat (msec) : 250=38.43%, 500=2.57% 00:25:54.099 cpu : usr=1.92%, sys=2.25%, ctx=3206, majf=0, minf=1 00:25:54.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:54.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.099 issued rwts: total=0,6065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.099 job8: (groupid=0, jobs=1): err= 0: pid=3603592: Fri Jul 26 22:55:45 2024 00:25:54.099 write: IOPS=503, BW=126MiB/s (132MB/s)(1272MiB/10105msec); 0 zone resets 00:25:54.099 slat (usec): min=17, max=98986, avg=1262.89, stdev=4253.00 00:25:54.099 clat (usec): min=1896, max=500665, avg=125779.55, stdev=82612.71 00:25:54.099 lat (msec): min=2, max=500, avg=127.04, stdev=83.45 00:25:54.099 clat percentiles (msec): 00:25:54.099 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 32], 20.00th=[ 55], 00:25:54.099 | 30.00th=[ 74], 40.00th=[ 92], 50.00th=[ 111], 60.00th=[ 126], 00:25:54.099 | 70.00th=[ 167], 80.00th=[ 201], 90.00th=[ 230], 95.00th=[ 268], 00:25:54.099 | 99.00th=[ 372], 99.50th=[ 393], 99.90th=[ 502], 99.95th=[ 502], 00:25:54.099 | 99.99th=[ 502] 00:25:54.099 bw ( KiB/s): min=71680, max=248832, per=9.64%, avg=128640.00, stdev=50074.95, samples=20 00:25:54.099 iops : min= 280, max= 972, avg=502.50, stdev=195.61, samples=20 00:25:54.099 lat (msec) : 2=0.02%, 4=0.20%, 10=3.42%, 20=2.38%, 50=11.81% 00:25:54.099 lat (msec) : 100=27.65%, 250=46.95%, 500=7.47%, 750=0.10% 00:25:54.099 cpu : usr=1.60%, sys=1.73%, ctx=3069, majf=0, minf=1 00:25:54.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:54.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.099 issued rwts: total=0,5088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.099 job9: (groupid=0, jobs=1): err= 0: pid=3603619: Fri Jul 26 22:55:45 2024 00:25:54.099 write: IOPS=446, BW=112MiB/s (117MB/s)(1132MiB/10137msec); 0 zone resets 00:25:54.099 slat (usec): min=24, max=69889, avg=1697.48, stdev=4371.38 00:25:54.099 clat (msec): min=2, max=414, avg=141.51, stdev=76.56 00:25:54.099 lat (msec): min=2, max=419, avg=143.21, stdev=77.70 00:25:54.099 clat percentiles (msec): 00:25:54.099 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 43], 20.00th=[ 85], 00:25:54.099 | 30.00th=[ 103], 40.00th=[ 115], 50.00th=[ 125], 60.00th=[ 146], 00:25:54.099 | 70.00th=[ 176], 80.00th=[ 211], 90.00th=[ 234], 95.00th=[ 279], 00:25:54.099 | 
99.00th=[ 372], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 414], 00:25:54.099 | 99.99th=[ 414] 00:25:54.099 bw ( KiB/s): min=45056, max=217600, per=8.57%, avg=114316.95, stdev=46262.89, samples=20 00:25:54.099 iops : min= 176, max= 850, avg=446.55, stdev=180.71, samples=20 00:25:54.099 lat (msec) : 4=0.13%, 10=0.64%, 20=2.27%, 50=8.68%, 100=17.38% 00:25:54.099 lat (msec) : 250=63.41%, 500=7.49% 00:25:54.099 cpu : usr=1.38%, sys=1.51%, ctx=2378, majf=0, minf=1 00:25:54.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:54.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.099 issued rwts: total=0,4528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.099 job10: (groupid=0, jobs=1): err= 0: pid=3603634: Fri Jul 26 22:55:45 2024 00:25:54.099 write: IOPS=475, BW=119MiB/s (125MB/s)(1200MiB/10098msec); 0 zone resets 00:25:54.099 slat (usec): min=22, max=31529, avg=1812.81, stdev=4120.50 00:25:54.099 clat (msec): min=5, max=352, avg=132.70, stdev=67.30 00:25:54.099 lat (msec): min=5, max=352, avg=134.51, stdev=68.28 00:25:54.099 clat percentiles (msec): 00:25:54.099 | 1.00th=[ 13], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 72], 00:25:54.099 | 30.00th=[ 89], 40.00th=[ 109], 50.00th=[ 118], 60.00th=[ 140], 00:25:54.099 | 70.00th=[ 165], 80.00th=[ 192], 90.00th=[ 228], 95.00th=[ 249], 00:25:54.099 | 99.00th=[ 321], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 355], 00:25:54.099 | 99.99th=[ 355] 00:25:54.099 bw ( KiB/s): min=51200, max=238080, per=9.09%, avg=121267.20, stdev=53332.99, samples=20 00:25:54.099 iops : min= 200, max= 930, avg=473.70, stdev=208.33, samples=20 00:25:54.099 lat (msec) : 10=0.54%, 20=1.48%, 50=5.77%, 100=26.33%, 250=61.28% 00:25:54.099 lat (msec) : 500=4.60% 00:25:54.099 cpu : usr=1.30%, sys=1.49%, ctx=1961, majf=0, minf=1 00:25:54.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:54.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:54.099 issued rwts: total=0,4801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:54.099 00:25:54.099 Run status group 0 (all jobs): 00:25:54.099 WRITE: bw=1303MiB/s (1366MB/s), 102MiB/s-150MiB/s (107MB/s-157MB/s), io=12.9GiB (13.9GB), run=10068-10139msec 00:25:54.099 00:25:54.099 Disk stats (read/write): 00:25:54.099 nvme0n1: ios=52/8303, merge=0/0, ticks=3095/1171665, in_queue=1174760, util=99.75% 00:25:54.099 nvme10n1: ios=45/8042, merge=0/0, ticks=1978/1209902, in_queue=1211880, util=99.93% 00:25:54.099 nvme1n1: ios=46/8975, merge=0/0, ticks=1824/1206872, in_queue=1208696, util=99.88% 00:25:54.099 nvme2n1: ios=0/9642, merge=0/0, ticks=0/1216512, in_queue=1216512, util=97.56% 00:25:54.099 nvme3n1: ios=44/9191, merge=0/0, ticks=241/1218079, in_queue=1218320, util=99.72% 00:25:54.099 nvme4n1: ios=38/8624, merge=0/0, ticks=45/1217473, in_queue=1217518, util=98.13% 00:25:54.099 nvme5n1: ios=38/10480, merge=0/0, ticks=992/1218999, in_queue=1219991, util=99.99% 00:25:54.099 nvme6n1: ios=0/11900, merge=0/0, ticks=0/1221013, in_queue=1221013, util=98.26% 00:25:54.099 nvme7n1: ios=0/9943, merge=0/0, ticks=0/1216463, in_queue=1216463, util=98.72% 00:25:54.099 nvme8n1: ios=43/8851, merge=0/0, ticks=163/1215674, in_queue=1215837, 
util=100.00% 00:25:54.099 nvme9n1: ios=41/9334, merge=0/0, ticks=1444/1207805, in_queue=1209249, util=99.95% 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:54.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:54.099 22:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:54.100 22:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.100 22:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.100 22:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.100 22:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.100 22:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.100 22:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:54.100 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.100 22:55:46 
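The teardown trace running here (subsystems 1 and 2 above, cnode3 through cnode11 below) walks all 11 subsystems: for each one it disconnects the initiator, waits for the SPDKn serial to disappear from lsblk, then deletes the subsystem over RPC. Condensed into a standalone sketch (the rpc.py path is an assumption; the test itself goes through its rpc_cmd wrapper):

    # Per-subsystem teardown, as performed by the surrounding trace (sketch).
    for i in $(seq 1 11); do
      nqn="nqn.2016-06.io.spdk:cnode${i}"
      nvme disconnect -n "$nqn"                          # initiator side: drop the controller
      ./spdk/scripts/rpc.py nvmf_delete_subsystem "$nqn" # target side: remove the subsystem
    done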
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:54.100 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.100 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:54.358 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:54.358 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.358 22:55:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:54.924 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:54.924 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # return 0 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.924 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:55.183 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:55.183 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:55.183 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:55.183 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:55.183 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:55.183 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:55.183 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:55.183 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:55.183 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:55.184 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.184 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:55.442 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:55.442 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:55.442 rmmod nvme_tcp 00:25:55.442 rmmod nvme_fabrics 00:25:55.442 rmmod nvme_keyring 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3598215 ']' 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3598215 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 3598215 ']' 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 3598215 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3598215 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3598215' 00:25:55.442 killing process with pid 3598215 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 3598215 00:25:55.442 22:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 3598215 00:25:56.009 22:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:56.009 22:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:56.009 22:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:56.009 22:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:56.009 22:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:56.009 22:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.009 22:55:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.009 22:55:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.577 22:55:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:58.577 00:25:58.577 real 0m59.901s 00:25:58.577 user 3m20.203s 00:25:58.577 sys 0m24.354s 00:25:58.577 22:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:58.577 22:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.577 ************************************ 00:25:58.577 
END TEST nvmf_multiconnection 00:25:58.577 ************************************ 00:25:58.577 22:55:50 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:58.577 22:55:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:58.577 22:55:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:58.577 22:55:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:58.577 ************************************ 00:25:58.577 START TEST nvmf_initiator_timeout 00:25:58.577 ************************************ 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:58.577 * Looking for test storage... 00:25:58.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.577 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:58.578 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:58.578 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:58.578 22:55:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:59.952 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:59.952 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:59.952 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:59.952 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.952 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.953 22:55:52 
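Both ports of the Intel E810 adapter were discovered (cvl_0_0 and cvl_0_1), and the trace around this point splits them across network namespaces so target and initiator can exercise real hardware on one machine: cvl_0_0 moves into a private namespace and is addressed as 10.0.0.2, while cvl_0_1 stays in the default namespace as 10.0.0.1. Condensed from the commands in this trace (a sketch, not the full nvmf_tcp_init logic):

    ip netns add cvl_0_0_ns_spdk                  # namespace that will host the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic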
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.953 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:00.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:26:00.211 00:26:00.211 --- 10.0.0.2 ping statistics --- 00:26:00.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.211 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:00.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:26:00.211 00:26:00.211 --- 10.0.0.1 ping statistics --- 00:26:00.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.211 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3606899 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3606899 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 3606899 ']' 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:00.211 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.212 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:00.212 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.212 [2024-07-26 22:55:52.624268] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:26:00.212 [2024-07-26 22:55:52.624371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.212 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.212 [2024-07-26 22:55:52.693271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:00.470 [2024-07-26 22:55:52.785028] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.470 [2024-07-26 22:55:52.785097] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.470 [2024-07-26 22:55:52.785125] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.470 [2024-07-26 22:55:52.785138] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.470 [2024-07-26 22:55:52.785150] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:00.470 [2024-07-26 22:55:52.785237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.470 [2024-07-26 22:55:52.785309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.470 [2024-07-26 22:55:52.785408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.470 [2024-07-26 22:55:52.785411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.470 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:00.470 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:00.470 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.470 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.470 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.470 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.471 Malloc0 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.471 Delay0 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.471 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.471 [2024-07-26 22:55:52.969805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable
00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:00.729 22:55:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:00.729 [2024-07-26 22:55:52.998034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:00.729 22:55:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:00.729 22:55:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:26:01.292 22:55:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:26:01.292 22:55:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0
00:26:01.292 22:55:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:01.292 22:55:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:01.292 22:55:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2
00:26:03.818 22:55:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:03.818 22:55:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:03.818 22:55:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME
00:26:03.818 22:55:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:03.818 22:55:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:03.818 22:55:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0
00:26:03.818 22:55:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3607304
00:26:03.818 22:55:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:26:03.818 22:55:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:26:03.818 [global]
00:26:03.818 thread=1
00:26:03.818 invalidate=1
00:26:03.818 rw=write
00:26:03.818 time_based=1
00:26:03.818 runtime=60
00:26:03.818 ioengine=libaio
00:26:03.818 direct=1
00:26:03.818 bs=4096
00:26:03.818 iodepth=1
00:26:03.818 norandommap=0
00:26:03.818 numjobs=1
00:26:03.818
00:26:03.818 verify_dump=1
00:26:03.818 verify_backlog=512
00:26:03.818 verify_state_save=0
00:26:03.818 do_verify=1
00:26:03.818 verify=crc32c-intel
00:26:03.818 [job0]
00:26:03.818 filename=/dev/nvme0n1
00:26:03.818 Could not set queue depth (nvme0n1)
00:26:03.818 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:26:03.818 fio-3.35
00:26:03.818
Starting 1 thread 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.344 true 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.344 true 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.344 true 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.344 true 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.344 22:55:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.620 true 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.620 true 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:09.620 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.621 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.621 true 00:26:09.621 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.621 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 
-- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:26:09.621 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.621 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:09.621 true
00:26:09.621 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.621 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:26:09.621 22:56:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3607304
00:27:05.821
00:27:05.821 job0: (groupid=0, jobs=1): err= 0: pid=3607375: Fri Jul 26 22:56:56 2024
00:27:05.821 read: IOPS=258, BW=1034KiB/s (1059kB/s)(60.6MiB/60009msec)
00:27:05.821 slat (usec): min=4, max=10050, avg=13.84, stdev=101.06
00:27:05.821 clat (usec): min=324, max=40853k, avg=3512.70, stdev=328054.77
00:27:05.821 lat (usec): min=330, max=40853k, avg=3526.54, stdev=328054.81
00:27:05.821 clat percentiles (usec):
00:27:05.821 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355],
00:27:05.821 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 379],
00:27:05.821 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 494], 95.00th=[ 570],
00:27:05.821 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:27:05.821 | 99.99th=[44827]
00:27:05.821 write: IOPS=264, BW=1058KiB/s (1083kB/s)(62.0MiB/60009msec); 0 zone resets
00:27:05.821 slat (nsec): min=5979, max=93713, avg=19125.96, stdev=11575.39
00:27:05.821 clat (usec): min=219, max=2779, avg=306.77, stdev=66.67
00:27:05.821 lat (usec): min=226, max=2786, avg=325.90, stdev=74.07
00:27:05.821 clat percentiles (usec):
00:27:05.821 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247],
00:27:05.821 | 30.00th=[ 258], 40.00th=[ 277], 50.00th=[ 293], 60.00th=[ 310],
00:27:05.821 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 420],
00:27:05.821 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[ 498], 99.95th=[ 578],
00:27:05.821 | 99.99th=[ 2180]
00:27:05.821 bw ( KiB/s): min= 2688, max= 7352, per=100.00%, avg=5079.04, stdev=1239.30, samples=25
00:27:05.821 iops : min= 672, max= 1838, avg=1269.76, stdev=309.82, samples=25
00:27:05.821 lat (usec) : 250=11.80%, 500=83.40%, 750=4.11%, 1000=0.05%
00:27:05.821 lat (msec) : 2=0.04%, 4=0.01%, 50=0.59%, >=2000=0.01%
00:27:05.821 cpu : usr=0.59%, sys=1.11%, ctx=31385, majf=0, minf=37
00:27:05.821 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:27:05.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:05.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:05.821 issued rwts: total=15510,15872,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:05.821 latency : target=0, window=0, percentile=100.00%, depth=1
00:27:05.821
00:27:05.821 Run status group 0 (all jobs):
00:27:05.821 READ: bw=1034KiB/s (1059kB/s), 1034KiB/s-1034KiB/s (1059kB/s-1059kB/s), io=60.6MiB (63.5MB), run=60009-60009msec
00:27:05.821 WRITE: bw=1058KiB/s (1083kB/s), 1058KiB/s-1058KiB/s (1083kB/s-1083kB/s), io=62.0MiB (65.0MB), run=60009-60009msec
00:27:05.821
00:27:05.821 Disk stats (read/write):
00:27:05.821 nvme0n1: ios=15606/15872, merge=0/0, ticks=14590/4718, in_queue=19308, util=99.60%
00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:27:05.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:27:05.821 22:56:56
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:05.821 nvmf hotplug test: fio successful as expected 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:05.821 rmmod nvme_tcp 00:27:05.821 rmmod nvme_fabrics 00:27:05.821 rmmod nvme_keyring 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3606899 ']' 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3606899 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 3606899 ']' 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 3606899 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:05.821 22:56:56 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3606899 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3606899' 00:27:05.821 killing process with pid 3606899 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 3606899 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 3606899 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.821 22:56:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.080 22:56:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.080 00:27:06.080 real 1m7.995s 00:27:06.080 user 4m9.907s 00:27:06.080 sys 0m7.777s 00:27:06.080 22:56:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:06.080 22:56:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.080 ************************************ 00:27:06.080 END TEST nvmf_initiator_timeout 00:27:06.080 ************************************ 00:27:06.080 22:56:58 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:06.080 22:56:58 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:06.080 22:56:58 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:06.080 22:56:58 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.080 22:56:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.984 22:57:00 nvmf_tcp -- 
nvmf/common.sh@298 -- # mlx=() 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:07.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:07.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:07.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:07.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.984 22:57:00 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.243 22:57:00 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:08.243 22:57:00 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:08.243 22:57:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:08.243 22:57:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:08.243 22:57:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.243 ************************************ 00:27:08.243 START TEST nvmf_perf_adq 00:27:08.243 ************************************ 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:08.243 * Looking for test storage... 
00:27:08.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.243 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:08.244 22:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:10.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:10.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 
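This pass over the PCI bus (gather_supported_nvmf_pci_devs, traced above and continuing below) is how the harness finds usable NICs: it matches vendor/device IDs against a table of supported parts (0x8086:0x159b, matched twice here, is an Intel E810 port driven by the ice driver) and then resolves each PCI function to its kernel interface through sysfs. Stripped of the xtrace noise, the resolution step is roughly this sketch:

# for each matching PCI function, list its netdevs via sysfs and record them
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../0000:0a:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

Both ports resolve to cvl_0_0 and cvl_0_1, which is how TCP_INTERFACE_LIST ends up with two entries and the perf_adq test is allowed to run.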
00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:10.178 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:10.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:10.178 22:57:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:10.746 22:57:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:12.643 22:57:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:17.914 22:57:10 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:17.914 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:17.914 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.914 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:17.915 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:17.915 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.915 22:57:10 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:27:17.915 00:27:17.915 --- 10.0.0.2 ping statistics --- 00:27:17.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.915 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:27:17.915 00:27:17.915 --- 10.0.0.1 ping statistics --- 00:27:17.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.915 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3619510 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3619510 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3619510 ']' 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:17.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:17.915 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.915 [2024-07-26 22:57:10.358228] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:17.915 [2024-07-26 22:57:10.358302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.915 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.174 [2024-07-26 22:57:10.428839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.174 [2024-07-26 22:57:10.523264] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.174 [2024-07-26 22:57:10.523327] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.174 [2024-07-26 22:57:10.523344] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.174 [2024-07-26 22:57:10.523357] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.174 [2024-07-26 22:57:10.523376] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.174 [2024-07-26 22:57:10.523459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.174 [2024-07-26 22:57:10.523527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.174 [2024-07-26 22:57:10.523618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.174 [2024-07-26 22:57:10.523621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.174 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.432 [2024-07-26 22:57:10.736710] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.432 Malloc1 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.432 [2024-07-26 22:57:10.787403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3619666 00:27:18.432 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:18.433 22:57:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:18.433 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.335 22:57:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:20.335 22:57:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.335 22:57:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.335 22:57:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.335 22:57:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:20.335 "tick_rate": 2700000000, 
00:27:20.335 "poll_groups": [ 00:27:20.335 { 00:27:20.335 "name": "nvmf_tgt_poll_group_000", 00:27:20.335 "admin_qpairs": 1, 00:27:20.335 "io_qpairs": 1, 00:27:20.335 "current_admin_qpairs": 1, 00:27:20.335 "current_io_qpairs": 1, 00:27:20.335 "pending_bdev_io": 0, 00:27:20.335 "completed_nvme_io": 19430, 00:27:20.335 "transports": [ 00:27:20.335 { 00:27:20.335 "trtype": "TCP" 00:27:20.335 } 00:27:20.335 ] 00:27:20.335 }, 00:27:20.335 { 00:27:20.335 "name": "nvmf_tgt_poll_group_001", 00:27:20.335 "admin_qpairs": 0, 00:27:20.335 "io_qpairs": 1, 00:27:20.335 "current_admin_qpairs": 0, 00:27:20.335 "current_io_qpairs": 1, 00:27:20.335 "pending_bdev_io": 0, 00:27:20.335 "completed_nvme_io": 19959, 00:27:20.335 "transports": [ 00:27:20.335 { 00:27:20.335 "trtype": "TCP" 00:27:20.335 } 00:27:20.335 ] 00:27:20.335 }, 00:27:20.335 { 00:27:20.335 "name": "nvmf_tgt_poll_group_002", 00:27:20.335 "admin_qpairs": 0, 00:27:20.335 "io_qpairs": 1, 00:27:20.335 "current_admin_qpairs": 0, 00:27:20.335 "current_io_qpairs": 1, 00:27:20.335 "pending_bdev_io": 0, 00:27:20.335 "completed_nvme_io": 20283, 00:27:20.335 "transports": [ 00:27:20.335 { 00:27:20.335 "trtype": "TCP" 00:27:20.335 } 00:27:20.335 ] 00:27:20.335 }, 00:27:20.335 { 00:27:20.335 "name": "nvmf_tgt_poll_group_003", 00:27:20.335 "admin_qpairs": 0, 00:27:20.335 "io_qpairs": 1, 00:27:20.335 "current_admin_qpairs": 0, 00:27:20.335 "current_io_qpairs": 1, 00:27:20.335 "pending_bdev_io": 0, 00:27:20.335 "completed_nvme_io": 17729, 00:27:20.335 "transports": [ 00:27:20.335 { 00:27:20.335 "trtype": "TCP" 00:27:20.335 } 00:27:20.335 ] 00:27:20.335 } 00:27:20.335 ] 00:27:20.335 }' 00:27:20.335 22:57:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:20.335 22:57:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:20.593 22:57:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:20.593 22:57:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:20.593 22:57:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3619666 00:27:28.711 Initializing NVMe Controllers 00:27:28.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:28.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:28.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:28.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:28.711 Initialization complete. Launching workers. 
00:27:28.711 ======================================================== 00:27:28.711 Latency(us) 00:27:28.711 Device Information : IOPS MiB/s Average min max 00:27:28.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9279.52 36.25 6896.90 4159.85 9452.18 00:27:28.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10546.85 41.20 6068.88 2077.34 9179.09 00:27:28.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10672.05 41.69 5997.31 2293.16 9482.96 00:27:28.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10203.67 39.86 6274.10 2652.32 9747.95 00:27:28.711 ======================================================== 00:27:28.711 Total : 40702.09 158.99 6290.34 2077.34 9747.95 00:27:28.711 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:28.711 rmmod nvme_tcp 00:27:28.711 rmmod nvme_fabrics 00:27:28.711 rmmod nvme_keyring 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3619510 ']' 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3619510 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3619510 ']' 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3619510 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:28.711 22:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3619510 00:27:28.711 22:57:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:28.711 22:57:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:28.711 22:57:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3619510' 00:27:28.711 killing process with pid 3619510 00:27:28.711 22:57:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3619510 00:27:28.711 22:57:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3619510 00:27:28.972 22:57:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:28.972 22:57:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:28.972 22:57:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:28.972 22:57:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.972 22:57:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:28.972 22:57:21 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.972 22:57:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.972 22:57:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.877 22:57:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:30.877 22:57:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:30.877 22:57:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:31.444 22:57:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:33.346 22:57:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.669 22:57:30 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:38.669 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:38.669 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:38.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:38.669 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.669 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.670 
22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.670 22:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:27:38.670 00:27:38.670 --- 10.0.0.2 ping statistics --- 00:27:38.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.670 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:38.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:27:38.670 00:27:38.670 --- 10.0.0.1 ping statistics --- 00:27:38.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.670 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:38.670 net.core.busy_poll = 1 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:38.670 net.core.busy_read = 1 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3622287 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3622287 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3622287 ']' 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:38.670 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.929 [2024-07-26 22:57:31.214944] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:38.929 [2024-07-26 22:57:31.215019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.929 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.929 [2024-07-26 22:57:31.286845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.929 [2024-07-26 22:57:31.379101] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.929 [2024-07-26 22:57:31.379156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.929 [2024-07-26 22:57:31.379173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.929 [2024-07-26 22:57:31.379186] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.929 [2024-07-26 22:57:31.379199] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
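Before this second (busy-poll) target instance was started, the test reloaded the ice driver and applied the ADQ plumbing visible in the ip netns exec cvl_0_0_ns_spdk commands above. Consolidated here as a sketch, with the namespace prefix dropped for readability; the device name cvl_0_0 and the queue split are taken from this run:

  ethtool --offload cvl_0_0 hw-tc-offload on        # enable hardware TC offload on the E810 port
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                    # usec to busy-wait in poll()/select()/epoll paths
  sysctl -w net.core.busy_read=1                    # usec to busy-wait on socket reads
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel   # TC0 = queues 0-1, TC1 = queues 2-3
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP flows to TC1 in hardware

On the target side, this pass pairs the filter with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1 (both visible below), versus placement-id 0 / sock-priority 0 in the first run.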
00:27:38.929 [2024-07-26 22:57:31.379258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.929 [2024-07-26 22:57:31.379341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.929 [2024-07-26 22:57:31.379406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.929 [2024-07-26 22:57:31.379408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.929 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:38.929 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:38.929 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:38.929 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.929 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.188 [2024-07-26 22:57:31.605029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.188 Malloc1 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.188 22:57:31 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.188 [2024-07-26 22:57:31.658170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3622315 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:39.188 22:57:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:39.450 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:41.353 "tick_rate": 2700000000, 00:27:41.353 "poll_groups": [ 00:27:41.353 { 00:27:41.353 "name": "nvmf_tgt_poll_group_000", 00:27:41.353 "admin_qpairs": 1, 00:27:41.353 "io_qpairs": 3, 00:27:41.353 "current_admin_qpairs": 1, 00:27:41.353 "current_io_qpairs": 3, 00:27:41.353 "pending_bdev_io": 0, 00:27:41.353 "completed_nvme_io": 26628, 00:27:41.353 "transports": [ 00:27:41.353 { 00:27:41.353 "trtype": "TCP" 00:27:41.353 } 00:27:41.353 ] 00:27:41.353 }, 00:27:41.353 { 00:27:41.353 "name": "nvmf_tgt_poll_group_001", 00:27:41.353 "admin_qpairs": 0, 00:27:41.353 "io_qpairs": 1, 00:27:41.353 "current_admin_qpairs": 0, 00:27:41.353 "current_io_qpairs": 1, 00:27:41.353 "pending_bdev_io": 0, 00:27:41.353 "completed_nvme_io": 25196, 00:27:41.353 "transports": [ 00:27:41.353 { 00:27:41.353 "trtype": "TCP" 00:27:41.353 } 00:27:41.353 ] 00:27:41.353 }, 00:27:41.353 { 00:27:41.353 "name": "nvmf_tgt_poll_group_002", 00:27:41.353 "admin_qpairs": 0, 00:27:41.353 "io_qpairs": 0, 00:27:41.353 "current_admin_qpairs": 0, 00:27:41.353 "current_io_qpairs": 0, 00:27:41.353 "pending_bdev_io": 0, 00:27:41.353 "completed_nvme_io": 0, 
00:27:41.353 "transports": [ 00:27:41.353 { 00:27:41.353 "trtype": "TCP" 00:27:41.353 } 00:27:41.353 ] 00:27:41.353 }, 00:27:41.353 { 00:27:41.353 "name": "nvmf_tgt_poll_group_003", 00:27:41.353 "admin_qpairs": 0, 00:27:41.353 "io_qpairs": 0, 00:27:41.353 "current_admin_qpairs": 0, 00:27:41.353 "current_io_qpairs": 0, 00:27:41.353 "pending_bdev_io": 0, 00:27:41.353 "completed_nvme_io": 0, 00:27:41.353 "transports": [ 00:27:41.353 { 00:27:41.353 "trtype": "TCP" 00:27:41.353 } 00:27:41.353 ] 00:27:41.353 } 00:27:41.353 ] 00:27:41.353 }' 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:41.353 22:57:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3622315 00:27:49.468 Initializing NVMe Controllers 00:27:49.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:49.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:49.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:49.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:49.468 Initialization complete. Launching workers. 00:27:49.468 ======================================================== 00:27:49.468 Latency(us) 00:27:49.468 Device Information : IOPS MiB/s Average min max 00:27:49.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13157.50 51.40 4864.25 1445.64 45889.35 00:27:49.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4423.80 17.28 14504.55 1769.51 61368.44 00:27:49.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4896.80 19.13 13070.54 1762.97 61874.19 00:27:49.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4661.90 18.21 13772.39 2235.99 60829.96 00:27:49.468 ======================================================== 00:27:49.468 Total : 27140.00 106.02 9446.42 1445.64 61874.19 00:27:49.468 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:49.468 rmmod nvme_tcp 00:27:49.468 rmmod nvme_fabrics 00:27:49.468 rmmod nvme_keyring 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3622287 ']' 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3622287 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3622287 ']' 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3622287 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3622287 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3622287' 00:27:49.468 killing process with pid 3622287 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3622287 00:27:49.468 22:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3622287 00:27:49.727 22:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:49.727 22:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:49.727 22:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:49.727 22:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.727 22:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:49.727 22:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.727 22:57:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.727 22:57:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.019 22:57:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:53.019 22:57:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:53.019 00:27:53.019 real 0m44.685s 00:27:53.019 user 2m30.569s 00:27:53.019 sys 0m12.785s 00:27:53.019 22:57:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:53.019 22:57:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.019 ************************************ 00:27:53.019 END TEST nvmf_perf_adq 00:27:53.019 ************************************ 00:27:53.019 22:57:45 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:53.019 22:57:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:53.019 22:57:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:53.019 22:57:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:53.019 ************************************ 00:27:53.019 START TEST nvmf_shutdown 00:27:53.019 ************************************ 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:53.019 * Looking for test storage... 
00:27:53.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:53.019 22:57:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:53.019 ************************************ 00:27:53.019 START TEST nvmf_shutdown_tc1 00:27:53.019 ************************************ 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:53.020 22:57:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.020 22:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:54.922 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:54.922 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.922 22:57:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:54.922 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:54.922 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:54.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:27:54.922 00:27:54.922 --- 10.0.0.2 ping statistics --- 00:27:54.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.922 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:27:54.922 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:54.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:27:54.923 00:27:54.923 --- 10.0.0.1 ping statistics --- 00:27:54.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.923 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3625606 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3625606 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3625606 ']' 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:54.923 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.181 [2024-07-26 22:57:47.431605] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
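
What the nvmf_tcp_init trace above amounts to: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and one ping in each direction proves the link before the target is started. Condensed into plain commands (same interface names and addresses as this run):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root netns -> target side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target netns -> initiator

Every target-side command is then prefixed with NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk), which is why nvmf_tgt in the lines that follow is launched through ip netns exec.
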
00:27:55.181 [2024-07-26 22:57:47.431691] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.181 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.181 [2024-07-26 22:57:47.497624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:55.181 [2024-07-26 22:57:47.582068] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.181 [2024-07-26 22:57:47.582120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.181 [2024-07-26 22:57:47.582141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.181 [2024-07-26 22:57:47.582153] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.181 [2024-07-26 22:57:47.582164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:55.181 [2024-07-26 22:57:47.582308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:55.181 [2024-07-26 22:57:47.582373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:55.181 [2024-07-26 22:57:47.582423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:55.181 [2024-07-26 22:57:47.582426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.439 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:55.439 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:55.439 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:55.439 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.439 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.439 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.439 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.440 [2024-07-26 22:57:47.720600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.440 22:57:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.440 Malloc1 00:27:55.440 [2024-07-26 22:57:47.795476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.440 Malloc2 00:27:55.440 Malloc3 00:27:55.440 Malloc4 00:27:55.700 Malloc5 00:27:55.700 Malloc6 00:27:55.700 Malloc7 00:27:55.700 Malloc8 00:27:55.700 Malloc9 00:27:55.700 Malloc10 00:27:55.959 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.959 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:55.959 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.959 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.959 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3625781 00:27:55.959 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3625781 /var/tmp/bdevperf.sock 00:27:55.959 22:57:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3625781 ']' 00:27:55.959 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:55.959 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:55.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": "$TEST_TRANSPORT", 00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": "$TEST_TRANSPORT", 00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": 
"$TEST_TRANSPORT", 00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": "$TEST_TRANSPORT", 00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": "$TEST_TRANSPORT", 00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": "$TEST_TRANSPORT", 00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": "$TEST_TRANSPORT", 
00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": "$TEST_TRANSPORT", 00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": "$TEST_TRANSPORT", 00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.960 { 00:27:55.960 "params": { 00:27:55.960 "name": "Nvme$subsystem", 00:27:55.960 "trtype": "$TEST_TRANSPORT", 00:27:55.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.960 "adrfam": "ipv4", 00:27:55.960 "trsvcid": "$NVMF_PORT", 00:27:55.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.960 "hdgst": ${hdgst:-false}, 00:27:55.960 "ddgst": ${ddgst:-false} 00:27:55.960 }, 00:27:55.960 "method": "bdev_nvme_attach_controller" 00:27:55.960 } 00:27:55.960 EOF 00:27:55.960 )") 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
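
The ten near-identical stanzas above are gen_nvmf_target_json from nvmf/common.sh unrolled once per subsystem (1 through 10): each pass through the @534 loop appends one heredoc with $subsystem expanded, and the pieces are comma-joined, pretty-printed through jq, and handed to bdev_svc as --json /dev/fd/63. A minimal sketch of the pattern, assuming the tcp/10.0.0.2:4420 values used in this run:

TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
config=()
for subsystem in "${@:-1}"; do    # defaults to a single subsystem "1" with no args
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"      # comma-joined stanzas, as echoed at @558

The fully expanded Nvme1..Nvme10 attach-controller list appears verbatim a few lines further down.
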
00:27:55.960 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:55.961 22:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme1", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.961 "hdgst": false, 00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 },{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme2", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:55.961 "hdgst": false, 00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 },{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme3", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:55.961 "hdgst": false, 00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 },{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme4", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:55.961 "hdgst": false, 00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 },{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme5", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:55.961 "hdgst": false, 00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 },{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme6", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:55.961 "hdgst": false, 00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 },{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme7", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:55.961 "hdgst": false, 00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 },{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme8", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:55.961 "hdgst": false, 
00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 },{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme9", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:55.961 "hdgst": false, 00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 },{ 00:27:55.961 "params": { 00:27:55.961 "name": "Nvme10", 00:27:55.961 "trtype": "tcp", 00:27:55.961 "traddr": "10.0.0.2", 00:27:55.961 "adrfam": "ipv4", 00:27:55.961 "trsvcid": "4420", 00:27:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:55.961 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:55.961 "hdgst": false, 00:27:55.961 "ddgst": false 00:27:55.961 }, 00:27:55.961 "method": "bdev_nvme_attach_controller" 00:27:55.961 }' 00:27:55.961 [2024-07-26 22:57:48.294265] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:55.961 [2024-07-26 22:57:48.294342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:55.961 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.961 [2024-07-26 22:57:48.358765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.961 [2024-07-26 22:57:48.445400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.865 22:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:57.865 22:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:57.865 22:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:57.865 22:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.865 22:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:57.865 22:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.865 22:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3625781 00:27:57.865 22:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:57.865 22:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:58.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3625781 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3625606 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.805 { 00:27:58.805 "params": { 00:27:58.805 "name": "Nvme$subsystem", 00:27:58.805 "trtype": "$TEST_TRANSPORT", 00:27:58.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.805 "adrfam": "ipv4", 00:27:58.805 "trsvcid": "$NVMF_PORT", 00:27:58.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.805 "hdgst": ${hdgst:-false}, 00:27:58.805 "ddgst": ${ddgst:-false} 00:27:58.805 }, 00:27:58.805 "method": "bdev_nvme_attach_controller" 00:27:58.805 } 00:27:58.805 EOF 00:27:58.805 )") 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.805 { 00:27:58.805 "params": { 00:27:58.805 "name": "Nvme$subsystem", 00:27:58.805 "trtype": "$TEST_TRANSPORT", 00:27:58.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.805 "adrfam": "ipv4", 00:27:58.805 "trsvcid": "$NVMF_PORT", 00:27:58.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.805 "hdgst": ${hdgst:-false}, 00:27:58.805 "ddgst": ${ddgst:-false} 00:27:58.805 }, 00:27:58.805 "method": "bdev_nvme_attach_controller" 00:27:58.805 } 00:27:58.805 EOF 00:27:58.805 )") 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.805 { 00:27:58.805 "params": { 00:27:58.805 "name": "Nvme$subsystem", 00:27:58.805 "trtype": "$TEST_TRANSPORT", 00:27:58.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.805 "adrfam": "ipv4", 00:27:58.805 "trsvcid": "$NVMF_PORT", 00:27:58.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.805 "hdgst": ${hdgst:-false}, 00:27:58.805 "ddgst": ${ddgst:-false} 00:27:58.805 }, 00:27:58.805 "method": "bdev_nvme_attach_controller" 00:27:58.805 } 00:27:58.805 EOF 00:27:58.805 )") 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.805 { 00:27:58.805 "params": { 00:27:58.805 "name": "Nvme$subsystem", 00:27:58.805 "trtype": "$TEST_TRANSPORT", 00:27:58.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.805 "adrfam": "ipv4", 00:27:58.805 "trsvcid": "$NVMF_PORT", 00:27:58.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.805 "hdgst": ${hdgst:-false}, 00:27:58.805 "ddgst": ${ddgst:-false} 00:27:58.805 }, 00:27:58.805 "method": "bdev_nvme_attach_controller" 00:27:58.805 } 00:27:58.805 EOF 00:27:58.805 )") 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.805 { 00:27:58.805 "params": { 00:27:58.805 "name": "Nvme$subsystem", 00:27:58.805 "trtype": "$TEST_TRANSPORT", 00:27:58.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.805 "adrfam": "ipv4", 00:27:58.805 "trsvcid": "$NVMF_PORT", 00:27:58.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.805 "hdgst": ${hdgst:-false}, 00:27:58.805 "ddgst": ${ddgst:-false} 00:27:58.805 }, 00:27:58.805 "method": "bdev_nvme_attach_controller" 00:27:58.805 } 00:27:58.805 EOF 00:27:58.805 )") 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.805 { 00:27:58.805 "params": { 00:27:58.805 "name": "Nvme$subsystem", 00:27:58.805 "trtype": "$TEST_TRANSPORT", 00:27:58.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.805 "adrfam": "ipv4", 00:27:58.805 "trsvcid": "$NVMF_PORT", 00:27:58.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.805 "hdgst": ${hdgst:-false}, 00:27:58.805 "ddgst": ${ddgst:-false} 00:27:58.805 }, 00:27:58.805 "method": "bdev_nvme_attach_controller" 00:27:58.805 } 00:27:58.805 EOF 00:27:58.805 )") 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.805 { 00:27:58.805 "params": { 00:27:58.805 "name": "Nvme$subsystem", 00:27:58.805 "trtype": "$TEST_TRANSPORT", 00:27:58.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.805 "adrfam": "ipv4", 00:27:58.805 "trsvcid": "$NVMF_PORT", 00:27:58.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.805 "hdgst": ${hdgst:-false}, 00:27:58.805 "ddgst": ${ddgst:-false} 00:27:58.805 }, 00:27:58.805 "method": "bdev_nvme_attach_controller" 00:27:58.805 } 00:27:58.805 EOF 00:27:58.805 )") 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.805 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.805 { 00:27:58.805 "params": { 00:27:58.805 "name": "Nvme$subsystem", 00:27:58.805 "trtype": "$TEST_TRANSPORT", 00:27:58.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.805 "adrfam": "ipv4", 00:27:58.805 "trsvcid": "$NVMF_PORT", 00:27:58.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.805 "hdgst": ${hdgst:-false}, 00:27:58.805 "ddgst": ${ddgst:-false} 00:27:58.805 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 } 00:27:58.806 EOF 00:27:58.806 )") 00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.806 { 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme$subsystem", 00:27:58.806 "trtype": "$TEST_TRANSPORT", 00:27:58.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "$NVMF_PORT", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.806 "hdgst": ${hdgst:-false}, 00:27:58.806 "ddgst": ${ddgst:-false} 00:27:58.806 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 } 00:27:58.806 EOF 00:27:58.806 )") 00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.806 { 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme$subsystem", 00:27:58.806 "trtype": "$TEST_TRANSPORT", 00:27:58.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "$NVMF_PORT", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.806 "hdgst": ${hdgst:-false}, 00:27:58.806 "ddgst": ${ddgst:-false} 00:27:58.806 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 } 00:27:58.806 EOF 00:27:58.806 )") 00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
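
This second expansion belongs to the step that gives the test case its name. shutdown.sh first attached a throwaway bdev_svc app to all ten subsystems, hard-killed it, and checked that the target survived; here it is generating the attach config for a real bdevperf verify run against the same target. The control flow, condensed from the @77-@91 trace lines (PIDs from this run shown as comments):

"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!                                   # 3625781 in this log
waitforlisten $perfpid /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
kill -9 $perfpid                             # abrupt initiator-side shutdown
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 $nvmfpid                             # target (3625606) must still be alive
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1
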
00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:58.806 22:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme1", 00:27:58.806 "trtype": "tcp", 00:27:58.806 "traddr": "10.0.0.2", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "4420", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:58.806 "hdgst": false, 00:27:58.806 "ddgst": false 00:27:58.806 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 },{ 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme2", 00:27:58.806 "trtype": "tcp", 00:27:58.806 "traddr": "10.0.0.2", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "4420", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:58.806 "hdgst": false, 00:27:58.806 "ddgst": false 00:27:58.806 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 },{ 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme3", 00:27:58.806 "trtype": "tcp", 00:27:58.806 "traddr": "10.0.0.2", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "4420", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:58.806 "hdgst": false, 00:27:58.806 "ddgst": false 00:27:58.806 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 },{ 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme4", 00:27:58.806 "trtype": "tcp", 00:27:58.806 "traddr": "10.0.0.2", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "4420", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:58.806 "hdgst": false, 00:27:58.806 "ddgst": false 00:27:58.806 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 },{ 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme5", 00:27:58.806 "trtype": "tcp", 00:27:58.806 "traddr": "10.0.0.2", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "4420", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:58.806 "hdgst": false, 00:27:58.806 "ddgst": false 00:27:58.806 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 },{ 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme6", 00:27:58.806 "trtype": "tcp", 00:27:58.806 "traddr": "10.0.0.2", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "4420", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:58.806 "hdgst": false, 00:27:58.806 "ddgst": false 00:27:58.806 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 },{ 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme7", 00:27:58.806 "trtype": "tcp", 00:27:58.806 "traddr": "10.0.0.2", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "4420", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:58.806 "hdgst": false, 00:27:58.806 "ddgst": false 00:27:58.806 }, 00:27:58.806 "method": "bdev_nvme_attach_controller" 00:27:58.806 },{ 00:27:58.806 "params": { 00:27:58.806 "name": "Nvme8", 00:27:58.806 "trtype": "tcp", 00:27:58.806 "traddr": "10.0.0.2", 00:27:58.806 "adrfam": "ipv4", 00:27:58.806 "trsvcid": "4420", 00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:58.806 "hdgst": false, 
00:27:58.806 "ddgst": false
00:27:58.806 },
00:27:58.806 "method": "bdev_nvme_attach_controller"
00:27:58.806 },{
00:27:58.806 "params": {
00:27:58.806 "name": "Nvme9",
00:27:58.806 "trtype": "tcp",
00:27:58.806 "traddr": "10.0.0.2",
00:27:58.806 "adrfam": "ipv4",
00:27:58.806 "trsvcid": "4420",
00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:27:58.806 "hdgst": false,
00:27:58.806 "ddgst": false
00:27:58.806 },
00:27:58.806 "method": "bdev_nvme_attach_controller"
00:27:58.806 },{
00:27:58.806 "params": {
00:27:58.806 "name": "Nvme10",
00:27:58.806 "trtype": "tcp",
00:27:58.806 "traddr": "10.0.0.2",
00:27:58.806 "adrfam": "ipv4",
00:27:58.806 "trsvcid": "4420",
00:27:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:27:58.806 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:27:58.806 "hdgst": false,
00:27:58.806 "ddgst": false
00:27:58.806 },
00:27:58.806 "method": "bdev_nvme_attach_controller"
00:27:58.806 }'
00:27:58.806 [2024-07-26 22:57:51.301437] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:27:58.806 [2024-07-26 22:57:51.301529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3626087 ]
00:27:59.064 EAL: No free 2048 kB hugepages reported on node 1
00:27:59.064 [2024-07-26 22:57:51.368726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:59.064 [2024-07-26 22:57:51.456579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:00.442 Running I/O for 1 seconds...
00:28:01.816
00:28:01.816 Latency(us)
00:28:01.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:01.816 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme1n1 : 1.10 241.62 15.10 0.00 0.00 260094.94 7961.41 233016.89
00:28:01.816 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme2n1 : 1.09 235.27 14.70 0.00 0.00 264591.93 19806.44 253211.69
00:28:01.816 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme3n1 : 1.17 219.51 13.72 0.00 0.00 279401.05 40777.96 250104.79
00:28:01.816 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme4n1 : 1.17 276.15 17.26 0.00 0.00 217579.97 8058.50 246997.90
00:28:01.816 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme5n1 : 1.14 224.92 14.06 0.00 0.00 263442.20 20777.34 237677.23
00:28:01.816 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme6n1 : 1.17 219.03 13.69 0.00 0.00 266592.71 22913.33 239230.67
00:28:01.816 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme7n1 : 1.19 214.23 13.39 0.00 0.00 268662.71 25826.04 265639.25
00:28:01.816 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme8n1 : 1.19 269.01 16.81 0.00 0.00 210280.90 20000.62 270299.59
00:28:01.816 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme9n1 : 1.20 267.18 16.70 0.00 0.00 208103.20 15534.46 251658.24
00:28:01.816 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.816 Verification LBA range: start 0x0 length 0x400
00:28:01.816 Nvme10n1 : 1.18 216.74 13.55 0.00 0.00 251904.57 22233.69 285834.05
00:28:01.816 ===================================================================================================================
00:28:01.816 Total : 2383.66 148.98 0.00 0.00 246481.14 7961.41 285834.05
00:28:01.816 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:28:01.816 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:01.817 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:01.817 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:01.817 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:01.817 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:01.817 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:28:01.817 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:01.817 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:28:01.817 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:01.817 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:01.817 rmmod nvme_tcp
00:28:02.076 rmmod nvme_fabrics
00:28:02.076 rmmod nvme_keyring
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3625606 ']'
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3625606
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3625606 ']'
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3625606
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3625606
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:02.076 22:57:54
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3625606' 00:28:02.076 killing process with pid 3625606 00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3625606 00:28:02.076 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3625606 00:28:02.673 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:02.673 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:02.673 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:02.673 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.673 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:02.673 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.673 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.673 22:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.588 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:04.588 00:28:04.588 real 0m11.584s 00:28:04.588 user 0m33.660s 00:28:04.588 sys 0m3.064s 00:28:04.588 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:04.588 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:04.588 ************************************ 00:28:04.588 END TEST nvmf_shutdown_tc1 00:28:04.588 ************************************ 00:28:04.588 22:57:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:04.588 22:57:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:04.588 22:57:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:04.588 22:57:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:04.588 ************************************ 00:28:04.588 START TEST nvmf_shutdown_tc2 00:28:04.588 ************************************ 00:28:04.588 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:28:04.588 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:04.589 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:04.589 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:04.589 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:04.589 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.589 22:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:04.589 22:57:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:04.589 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:04.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:28:04.848 00:28:04.848 --- 10.0.0.2 ping statistics --- 00:28:04.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.848 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:28:04.848 00:28:04.848 --- 10.0.0.1 ping statistics --- 00:28:04.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.848 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3626969 
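
The nvmf_tcp_init sequence traced above builds the two-namespace TCP test topology: the target-side port (cvl_0_0) is moved into a private network namespace, the initiator-side port (cvl_0_1) stays in the root namespace, and one ping in each direction proves 10.0.0.1 <-> 10.0.0.2 reachability before the target is launched. Condensed to its essentials (interface names, addresses, and port 4420 are taken verbatim from the trace; this is a sketch, not the full nvmf/common.sh helper):

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
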
00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3626969 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3626969 ']' 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:04.848 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.848 [2024-07-26 22:57:57.206348] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:04.848 [2024-07-26 22:57:57.206418] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.848 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.848 [2024-07-26 22:57:57.272749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.106 [2024-07-26 22:57:57.363651] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.106 [2024-07-26 22:57:57.363712] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.106 [2024-07-26 22:57:57.363738] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.106 [2024-07-26 22:57:57.363752] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.106 [2024-07-26 22:57:57.363764] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
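
The target is launched with -m 0x1E, i.e. core mask 0b11110 (cores 1 through 4), which is why the EAL reports "Total cores available: 4" above and four reactor threads start on cores 1-4 in the records that follow. waitforlisten then blocks (max_retries=100) until the target's RPC socket is usable. A minimal sketch of such a wait loop, assuming it only needs to poll for the UNIX-domain socket; the real helper in autotest_common.sh performs additional checks:

waitforlisten_sketch() {
    # sketch only: poll until the RPC socket appears or the process dies
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target exited during startup
        [[ -S "$rpc_addr" ]] && return 0          # RPC listener socket is up
        sleep 0.1
    done
    return 1                                      # never came up
}
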
00:28:05.106 [2024-07-26 22:57:57.363851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.106 [2024-07-26 22:57:57.363963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.106 [2024-07-26 22:57:57.364030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:05.106 [2024-07-26 22:57:57.364032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.106 [2024-07-26 22:57:57.503587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.106 22:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.106 Malloc1 00:28:05.106 [2024-07-26 22:57:57.578439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.106 Malloc2 00:28:05.366 Malloc3 00:28:05.366 Malloc4 00:28:05.366 Malloc5 00:28:05.366 Malloc6 00:28:05.366 Malloc7 00:28:05.625 Malloc8 00:28:05.625 Malloc9 00:28:05.625 Malloc10 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3627031 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3627031 /var/tmp/bdevperf.sock 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3627031 ']' 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
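
Note the --json /dev/fd/63 argument in the bdevperf command line above: the JSON produced by gen_nvmf_target_json 1 2 ... 10 reaches bdevperf through bash process substitution rather than a temporary file. Schematically (the <(...) form below is inferred from the /dev/fd/63 file descriptor in the trace):

# -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: verified reads;
# -t 10: requested run time in seconds; -r: private RPC socket for bdevperf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10
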
00:28:05.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.625 { 00:28:05.625 "params": { 00:28:05.625 "name": "Nvme$subsystem", 00:28:05.625 "trtype": "$TEST_TRANSPORT", 00:28:05.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "$NVMF_PORT", 00:28:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.625 "hdgst": ${hdgst:-false}, 00:28:05.625 "ddgst": ${ddgst:-false} 00:28:05.625 }, 00:28:05.625 "method": "bdev_nvme_attach_controller" 00:28:05.625 } 00:28:05.625 EOF 00:28:05.625 )") 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.625 { 00:28:05.625 "params": { 00:28:05.625 "name": "Nvme$subsystem", 00:28:05.625 "trtype": "$TEST_TRANSPORT", 00:28:05.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "$NVMF_PORT", 00:28:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.625 "hdgst": ${hdgst:-false}, 00:28:05.625 "ddgst": ${ddgst:-false} 00:28:05.625 }, 00:28:05.625 "method": "bdev_nvme_attach_controller" 00:28:05.625 } 00:28:05.625 EOF 00:28:05.625 )") 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.625 { 00:28:05.625 "params": { 00:28:05.625 "name": "Nvme$subsystem", 00:28:05.625 "trtype": "$TEST_TRANSPORT", 00:28:05.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "$NVMF_PORT", 00:28:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.625 "hdgst": ${hdgst:-false}, 00:28:05.625 "ddgst": ${ddgst:-false} 00:28:05.625 }, 00:28:05.625 "method": "bdev_nvme_attach_controller" 00:28:05.625 } 00:28:05.625 EOF 00:28:05.625 )") 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.625 { 00:28:05.625 "params": { 00:28:05.625 "name": "Nvme$subsystem", 00:28:05.625 "trtype": "$TEST_TRANSPORT", 00:28:05.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "$NVMF_PORT", 
00:28:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.625 "hdgst": ${hdgst:-false}, 00:28:05.625 "ddgst": ${ddgst:-false} 00:28:05.625 }, 00:28:05.625 "method": "bdev_nvme_attach_controller" 00:28:05.625 } 00:28:05.625 EOF 00:28:05.625 )") 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.625 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.625 { 00:28:05.625 "params": { 00:28:05.625 "name": "Nvme$subsystem", 00:28:05.625 "trtype": "$TEST_TRANSPORT", 00:28:05.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "$NVMF_PORT", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.626 "hdgst": ${hdgst:-false}, 00:28:05.626 "ddgst": ${ddgst:-false} 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 } 00:28:05.626 EOF 00:28:05.626 )") 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.626 { 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme$subsystem", 00:28:05.626 "trtype": "$TEST_TRANSPORT", 00:28:05.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "$NVMF_PORT", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.626 "hdgst": ${hdgst:-false}, 00:28:05.626 "ddgst": ${ddgst:-false} 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 } 00:28:05.626 EOF 00:28:05.626 )") 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.626 { 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme$subsystem", 00:28:05.626 "trtype": "$TEST_TRANSPORT", 00:28:05.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "$NVMF_PORT", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.626 "hdgst": ${hdgst:-false}, 00:28:05.626 "ddgst": ${ddgst:-false} 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 } 00:28:05.626 EOF 00:28:05.626 )") 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.626 { 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme$subsystem", 00:28:05.626 "trtype": "$TEST_TRANSPORT", 00:28:05.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "$NVMF_PORT", 00:28:05.626 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.626 "hdgst": ${hdgst:-false}, 00:28:05.626 "ddgst": ${ddgst:-false} 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 } 00:28:05.626 EOF 00:28:05.626 )") 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.626 { 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme$subsystem", 00:28:05.626 "trtype": "$TEST_TRANSPORT", 00:28:05.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "$NVMF_PORT", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.626 "hdgst": ${hdgst:-false}, 00:28:05.626 "ddgst": ${ddgst:-false} 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 } 00:28:05.626 EOF 00:28:05.626 )") 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.626 { 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme$subsystem", 00:28:05.626 "trtype": "$TEST_TRANSPORT", 00:28:05.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "$NVMF_PORT", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.626 "hdgst": ${hdgst:-false}, 00:28:05.626 "ddgst": ${ddgst:-false} 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 } 00:28:05.626 EOF 00:28:05.626 )") 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:05.626 22:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme1", 00:28:05.626 "trtype": "tcp", 00:28:05.626 "traddr": "10.0.0.2", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "4420", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.626 "hdgst": false, 00:28:05.626 "ddgst": false 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 },{ 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme2", 00:28:05.626 "trtype": "tcp", 00:28:05.626 "traddr": "10.0.0.2", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "4420", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:05.626 "hdgst": false, 00:28:05.626 "ddgst": false 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 },{ 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme3", 00:28:05.626 "trtype": "tcp", 00:28:05.626 "traddr": "10.0.0.2", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "4420", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:05.626 "hdgst": false, 00:28:05.626 "ddgst": false 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 },{ 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme4", 00:28:05.626 "trtype": "tcp", 00:28:05.626 "traddr": "10.0.0.2", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "4420", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:05.626 "hdgst": false, 00:28:05.626 "ddgst": false 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 },{ 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme5", 00:28:05.626 "trtype": "tcp", 00:28:05.626 "traddr": "10.0.0.2", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "4420", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:05.626 "hdgst": false, 00:28:05.626 "ddgst": false 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 },{ 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme6", 00:28:05.626 "trtype": "tcp", 00:28:05.626 "traddr": "10.0.0.2", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "4420", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:05.626 "hdgst": false, 00:28:05.626 "ddgst": false 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 },{ 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme7", 00:28:05.626 "trtype": "tcp", 00:28:05.626 "traddr": "10.0.0.2", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "4420", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:05.626 "hdgst": false, 00:28:05.626 "ddgst": false 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 },{ 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme8", 00:28:05.626 "trtype": "tcp", 00:28:05.626 "traddr": "10.0.0.2", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "4420", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:05.626 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:05.626 "hdgst": false, 
00:28:05.626 "ddgst": false 00:28:05.626 }, 00:28:05.626 "method": "bdev_nvme_attach_controller" 00:28:05.626 },{ 00:28:05.626 "params": { 00:28:05.626 "name": "Nvme9", 00:28:05.626 "trtype": "tcp", 00:28:05.626 "traddr": "10.0.0.2", 00:28:05.626 "adrfam": "ipv4", 00:28:05.626 "trsvcid": "4420", 00:28:05.626 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:05.627 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:05.627 "hdgst": false, 00:28:05.627 "ddgst": false 00:28:05.627 }, 00:28:05.627 "method": "bdev_nvme_attach_controller" 00:28:05.627 },{ 00:28:05.627 "params": { 00:28:05.627 "name": "Nvme10", 00:28:05.627 "trtype": "tcp", 00:28:05.627 "traddr": "10.0.0.2", 00:28:05.627 "adrfam": "ipv4", 00:28:05.627 "trsvcid": "4420", 00:28:05.627 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:05.627 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:05.627 "hdgst": false, 00:28:05.627 "ddgst": false 00:28:05.627 }, 00:28:05.627 "method": "bdev_nvme_attach_controller" 00:28:05.627 }' 00:28:05.627 [2024-07-26 22:57:58.087555] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:05.627 [2024-07-26 22:57:58.087642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3627031 ] 00:28:05.627 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.886 [2024-07-26 22:57:58.152745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.886 [2024-07-26 22:57:58.241117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.263 Running I/O for 10 seconds... 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:07.830 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=194 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 194 -ge 100 ']' 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3627031 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3627031 ']' 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3627031 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3627031 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3627031' 00:28:08.090 killing process with pid 3627031 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3627031 00:28:08.090 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3627031 00:28:08.349 Received shutdown signal, test time was about 1.045765 seconds 00:28:08.349 00:28:08.349 Latency(us) 00:28:08.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.349 Job: Nvme1n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme1n1 : 1.04 247.14 15.45 0.00 0.00 255512.46 20388.98 270299.59 00:28:08.349 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme2n1 : 0.99 194.10 12.13 0.00 0.00 317243.04 19806.44 271853.04 00:28:08.349 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme3n1 : 1.04 246.05 15.38 0.00 0.00 245350.78 22136.60 260978.92 00:28:08.349 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme4n1 : 1.03 247.93 15.50 0.00 0.00 236245.90 32428.18 234570.33 00:28:08.349 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme5n1 : 1.04 244.99 15.31 0.00 0.00 234946.37 23107.51 282727.16 00:28:08.349 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme6n1 : 1.02 188.55 11.78 0.00 0.00 296872.77 23398.78 292047.83 00:28:08.349 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme7n1 : 1.00 192.21 12.01 0.00 0.00 282943.08 23204.60 268746.15 00:28:08.349 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme8n1 : 1.02 187.89 11.74 0.00 0.00 283183.03 21651.15 284280.60 00:28:08.349 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme9n1 : 1.03 186.82 11.68 0.00 0.00 277622.83 23204.60 302921.96 00:28:08.349 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.349 Verification LBA range: start 0x0 length 0x400 00:28:08.349 Nvme10n1 : 1.01 190.68 11.92 0.00 0.00 263455.73 20874.43 253211.69 00:28:08.349 =================================================================================================================== 00:28:08.349 Total : 2126.34 132.90 0.00 0.00 266240.69 19806.44 302921.96 00:28:08.349 22:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3626969 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 
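
A quick consistency check on the bdevperf summary above: with -o 65536 each I/O is 64 KiB, so MiB/s = IOPS x 64 / 1024. Nvme1n1, for example: 247.14 x 64 / 1024 ≈ 15.45 MiB/s, matching its column, and the ten per-job IOPS values sum to ≈ 2126.3, in line with the Total row (2126.34 IOPS, 132.90 MiB/s). The roughly one-second runtimes are expected here: tc2 killed bdevperf (pid 3627031) as soon as Nvme1n1 crossed 100 read completions, so bdevperf reports "test time was about 1.045765 seconds" rather than the requested ten and exits cleanly instead of with an error.
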
00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:09.725 rmmod nvme_tcp 00:28:09.725 rmmod nvme_fabrics 00:28:09.725 rmmod nvme_keyring 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3626969 ']' 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3626969 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3626969 ']' 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3626969 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3626969 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3626969' 00:28:09.725 killing process with pid 3626969 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3626969 00:28:09.725 22:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3626969 00:28:09.985 22:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:09.985 22:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:09.985 22:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:09.985 22:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.985 22:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.985 22:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.985 22:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.985 22:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:12.515 00:28:12.515 real 0m7.485s 00:28:12.515 user 0m22.133s 00:28:12.515 sys 0m1.543s 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.515 ************************************ 00:28:12.515 END TEST nvmf_shutdown_tc2 00:28:12.515 ************************************ 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:12.515 ************************************ 00:28:12.515 START TEST nvmf_shutdown_tc3 00:28:12.515 ************************************ 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 
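
nvmf_shutdown_tc3 now repeats the same nvmftestinit bring-up that tc2 ran: the records that follow rebuild the supported-NIC device-ID tables, rediscover the two E810 ports, and recreate the namespace topology. The PCI-to-netdev mapping step, condensed (a sketch: the sysfs layout is standard, the address is one of the two ports found in the trace, and nullglob is added here so the emptiness check is meaningful):

shopt -s nullglob                                   # empty array when nothing matches
pci=0000:0a:00.0
pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )  # e.g. .../net/cvl_0_0
(( ${#pci_net_devs[@]} > 0 )) || exit 1             # no netdev bound to this port
pci_net_devs=( "${pci_net_devs[@]##*/}" )           # strip paths, keep iface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
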
00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.515 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:12.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.516 22:58:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:12.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:12.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:12.516 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:12.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:12.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:28:12.516 00:28:12.516 --- 10.0.0.2 ping statistics --- 00:28:12.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.516 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:12.516 00:28:12.516 --- 10.0.0.1 ping statistics --- 00:28:12.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.516 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3627935 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3627935 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3627935 ']' 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
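(For readers skimming the xtrace above: the nvmf_tcp_init step that just completed reduces to the short shell sequence below. This is a condensed sketch assembled from the commands logged above, not the verbatim nvmf/common.sh source; cvl_0_0/cvl_0_1 are the renamed E810 ports discovered earlier, and everything runs as root.)

# Clear any stale addressing, then move the target port into its own netns and
# wire a point-to-point 10.0.0.0/24 link between initiator (host) and target (netns).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port and verify reachability in both directions;
# nvmf_tgt itself is then launched inside the namespace via the
# "ip netns exec cvl_0_0_ns_spdk" prefix (NVMF_TARGET_NS_CMD above).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1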
00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:12.516 22:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.516 [2024-07-26 22:58:04.740909] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:12.517 [2024-07-26 22:58:04.740978] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.517 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.517 [2024-07-26 22:58:04.807108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.517 [2024-07-26 22:58:04.899815] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.517 [2024-07-26 22:58:04.899879] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.517 [2024-07-26 22:58:04.899912] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.517 [2024-07-26 22:58:04.899927] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.517 [2024-07-26 22:58:04.899939] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.517 [2024-07-26 22:58:04.900038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.517 [2024-07-26 22:58:04.900151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.517 [2024-07-26 22:58:04.900180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:12.517 [2024-07-26 22:58:04.900184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.774 [2024-07-26 22:58:05.044658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:12.774 
22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.774 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.775 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.775 Malloc1 00:28:12.775 [2024-07-26 22:58:05.119629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.775 Malloc2 00:28:12.775 Malloc3 00:28:12.775 Malloc4 00:28:13.033 Malloc5 00:28:13.033 Malloc6 00:28:13.033 Malloc7 00:28:13.033 Malloc8 00:28:13.033 Malloc9 00:28:13.033 Malloc10 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:13.292 22:58:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3628109 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3628109 /var/tmp/bdevperf.sock 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3628109 ']' 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:13.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.292 { 00:28:13.292 "params": { 00:28:13.292 "name": "Nvme$subsystem", 00:28:13.292 "trtype": "$TEST_TRANSPORT", 00:28:13.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.292 "adrfam": "ipv4", 00:28:13.292 "trsvcid": "$NVMF_PORT", 00:28:13.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.292 "hdgst": ${hdgst:-false}, 00:28:13.292 "ddgst": ${ddgst:-false} 00:28:13.292 }, 00:28:13.292 "method": "bdev_nvme_attach_controller" 00:28:13.292 } 00:28:13.292 EOF 00:28:13.292 )") 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.292 { 00:28:13.292 "params": { 00:28:13.292 "name": "Nvme$subsystem", 00:28:13.292 "trtype": "$TEST_TRANSPORT", 00:28:13.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.292 "adrfam": "ipv4", 00:28:13.292 "trsvcid": "$NVMF_PORT", 00:28:13.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.292 "hdgst": ${hdgst:-false}, 00:28:13.292 "ddgst": ${ddgst:-false} 00:28:13.292 }, 00:28:13.292 "method": "bdev_nvme_attach_controller" 00:28:13.292 } 00:28:13.292 EOF 00:28:13.292 )") 00:28:13.292 22:58:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.292 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.292 { 00:28:13.292 "params": { 00:28:13.293 "name": "Nvme$subsystem", 00:28:13.293 "trtype": "$TEST_TRANSPORT", 00:28:13.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "$NVMF_PORT", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.293 "hdgst": ${hdgst:-false}, 00:28:13.293 "ddgst": ${ddgst:-false} 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 } 00:28:13.293 EOF 00:28:13.293 )") 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.293 { 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme$subsystem", 00:28:13.293 "trtype": "$TEST_TRANSPORT", 00:28:13.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "$NVMF_PORT", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.293 "hdgst": ${hdgst:-false}, 00:28:13.293 "ddgst": ${ddgst:-false} 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 } 00:28:13.293 EOF 00:28:13.293 )") 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.293 { 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme$subsystem", 00:28:13.293 "trtype": "$TEST_TRANSPORT", 00:28:13.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "$NVMF_PORT", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.293 "hdgst": ${hdgst:-false}, 00:28:13.293 "ddgst": ${ddgst:-false} 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 } 00:28:13.293 EOF 00:28:13.293 )") 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.293 { 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme$subsystem", 00:28:13.293 "trtype": "$TEST_TRANSPORT", 00:28:13.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "$NVMF_PORT", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.293 "hdgst": ${hdgst:-false}, 00:28:13.293 "ddgst": ${ddgst:-false} 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 } 00:28:13.293 EOF 00:28:13.293 )") 00:28:13.293 22:58:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.293 { 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme$subsystem", 00:28:13.293 "trtype": "$TEST_TRANSPORT", 00:28:13.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "$NVMF_PORT", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.293 "hdgst": ${hdgst:-false}, 00:28:13.293 "ddgst": ${ddgst:-false} 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 } 00:28:13.293 EOF 00:28:13.293 )") 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.293 { 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme$subsystem", 00:28:13.293 "trtype": "$TEST_TRANSPORT", 00:28:13.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "$NVMF_PORT", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.293 "hdgst": ${hdgst:-false}, 00:28:13.293 "ddgst": ${ddgst:-false} 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 } 00:28:13.293 EOF 00:28:13.293 )") 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.293 { 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme$subsystem", 00:28:13.293 "trtype": "$TEST_TRANSPORT", 00:28:13.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "$NVMF_PORT", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.293 "hdgst": ${hdgst:-false}, 00:28:13.293 "ddgst": ${ddgst:-false} 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 } 00:28:13.293 EOF 00:28:13.293 )") 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.293 { 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme$subsystem", 00:28:13.293 "trtype": "$TEST_TRANSPORT", 00:28:13.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "$NVMF_PORT", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.293 "hdgst": ${hdgst:-false}, 00:28:13.293 "ddgst": ${ddgst:-false} 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 } 00:28:13.293 EOF 00:28:13.293 )") 00:28:13.293 22:58:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:13.293 22:58:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme1", 00:28:13.293 "trtype": "tcp", 00:28:13.293 "traddr": "10.0.0.2", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "4420", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:13.293 "hdgst": false, 00:28:13.293 "ddgst": false 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 },{ 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme2", 00:28:13.293 "trtype": "tcp", 00:28:13.293 "traddr": "10.0.0.2", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "4420", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:13.293 "hdgst": false, 00:28:13.293 "ddgst": false 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 },{ 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme3", 00:28:13.293 "trtype": "tcp", 00:28:13.293 "traddr": "10.0.0.2", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "4420", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:13.293 "hdgst": false, 00:28:13.293 "ddgst": false 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 },{ 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme4", 00:28:13.293 "trtype": "tcp", 00:28:13.293 "traddr": "10.0.0.2", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "4420", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:13.293 "hdgst": false, 00:28:13.293 "ddgst": false 00:28:13.293 }, 00:28:13.293 "method": "bdev_nvme_attach_controller" 00:28:13.293 },{ 00:28:13.293 "params": { 00:28:13.293 "name": "Nvme5", 00:28:13.293 "trtype": "tcp", 00:28:13.293 "traddr": "10.0.0.2", 00:28:13.293 "adrfam": "ipv4", 00:28:13.293 "trsvcid": "4420", 00:28:13.293 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:13.293 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:13.293 "hdgst": false, 00:28:13.293 "ddgst": false 00:28:13.293 }, 00:28:13.294 "method": "bdev_nvme_attach_controller" 00:28:13.294 },{ 00:28:13.294 "params": { 00:28:13.294 "name": "Nvme6", 00:28:13.294 "trtype": "tcp", 00:28:13.294 "traddr": "10.0.0.2", 00:28:13.294 "adrfam": "ipv4", 00:28:13.294 "trsvcid": "4420", 00:28:13.294 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:13.294 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:13.294 "hdgst": false, 00:28:13.294 "ddgst": false 00:28:13.294 }, 00:28:13.294 "method": "bdev_nvme_attach_controller" 00:28:13.294 },{ 00:28:13.294 "params": { 00:28:13.294 "name": "Nvme7", 00:28:13.294 "trtype": "tcp", 00:28:13.294 "traddr": "10.0.0.2", 00:28:13.294 "adrfam": "ipv4", 00:28:13.294 "trsvcid": "4420", 00:28:13.294 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:13.294 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:13.294 "hdgst": false, 00:28:13.294 "ddgst": false 00:28:13.294 }, 00:28:13.294 "method": "bdev_nvme_attach_controller" 00:28:13.294 },{ 00:28:13.294 "params": { 00:28:13.294 "name": "Nvme8", 00:28:13.294 "trtype": "tcp", 00:28:13.294 "traddr": "10.0.0.2", 00:28:13.294 "adrfam": "ipv4", 
00:28:13.294 "trsvcid": "4420", 00:28:13.294 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:13.294 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:13.294 "hdgst": false, 00:28:13.294 "ddgst": false 00:28:13.294 }, 00:28:13.294 "method": "bdev_nvme_attach_controller" 00:28:13.294 },{ 00:28:13.294 "params": { 00:28:13.294 "name": "Nvme9", 00:28:13.294 "trtype": "tcp", 00:28:13.294 "traddr": "10.0.0.2", 00:28:13.294 "adrfam": "ipv4", 00:28:13.294 "trsvcid": "4420", 00:28:13.294 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:13.294 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:13.294 "hdgst": false, 00:28:13.294 "ddgst": false 00:28:13.294 }, 00:28:13.294 "method": "bdev_nvme_attach_controller" 00:28:13.294 },{ 00:28:13.294 "params": { 00:28:13.294 "name": "Nvme10", 00:28:13.294 "trtype": "tcp", 00:28:13.294 "traddr": "10.0.0.2", 00:28:13.294 "adrfam": "ipv4", 00:28:13.294 "trsvcid": "4420", 00:28:13.294 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:13.294 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:13.294 "hdgst": false, 00:28:13.294 "ddgst": false 00:28:13.294 }, 00:28:13.294 "method": "bdev_nvme_attach_controller" 00:28:13.294 }' 00:28:13.294 [2024-07-26 22:58:05.620126] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:13.294 [2024-07-26 22:58:05.620208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3628109 ] 00:28:13.294 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.294 [2024-07-26 22:58:05.684174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.294 [2024-07-26 22:58:05.771196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.825 Running I/O for 10 seconds... 
00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:15.825 22:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:15.825 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:16.084 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:16.084 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:16.084 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:16.084 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3627935 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3627935 ']' 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3627935 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3627935 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3627935' 00:28:16.085 killing process with pid 3627935 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3627935 00:28:16.085 22:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3627935 00:28:16.085 [2024-07-26 22:58:08.576329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3a90 is same with the state(5) to be set 00:28:16.085 [2024-07-26 22:58:08.576417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3a90 is same with the state(5) to be set 00:28:16.085 [2024-07-26 22:58:08.576443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3a90 is same with the state(5) to be set 00:28:16.085 [2024-07-26 22:58:08.576456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x13c3a90 is same with the state(5) to be set
[... the same tcp.c:1598:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats with consecutive timestamps: for tqpair=0x13c3a90 through 2024-07-26 22:58:08.577267, for tqpair=0x12cf8a0 (22:58:08.578485 - 22:58:08.578580), for tqpair=0x13c3f30 (22:58:08.579296 - 22:58:08.580169), and for tqpair=0x13c4d30 from 22:58:08.582885 ...]
[2024-07-26 22:58:08.583497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.583754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d30 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 
22:58:08.585613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.087 [2024-07-26 22:58:08.585768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same 
with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.585989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586203] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586292] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.586431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ceac0 is same with the state(5) to be set 00:28:16.357 [2024-07-26 22:58:08.587172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.357 [2024-07-26 22:58:08.587224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.357 [2024-07-26 22:58:08.587242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.357 [2024-07-26 22:58:08.587256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
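The flood above comes from the recv-state setter in SPDK's NVMe-oF TCP target transport (lib/nvmf/tcp.c): it refuses to "transition" a qpair into the state it already holds and only logs, so a qpair wedged during teardown can emit the line on every poll. A minimal paraphrase of that guard follows, assuming v24.05-era sources; the function shape and the exact enum member behind state(5) (the error/quiescing end of enum nvme_tcp_pdu_recv_state) may differ by revision:

    /* Paraphrased sketch, not verbatim SPDK code: the no-op-transition guard
     * in nvmf_tcp_qpair_set_recv_state() that produces the messages above. */
    static void
    nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* Asked to enter the state we are already in: log and
                     * return, which is why a stuck qpair repeats this line. */
                    SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                                tqpair, state);
                    return;
            }

            tqpair->recv_state = state;
            /* per-state bookkeeping (PDU reset, quiescing checks) elided */
    }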
00:28:16.357 [2024-07-26 22:58:08.587172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:16.357 [2024-07-26 22:58:08.587224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs for cid:1-3, omitted ...]
00:28:16.357 [2024-07-26 22:58:08.587332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8cec0 is same with the state(5) to be set
[... matching ASYNC EVENT REQUEST (qid:0 cid:0-3) ABORTED - SQ DELETION blocks for host tqpairs 0xbfbde0, 0xd55110, 0xbd4400, 0xbdba60, 0x79bdf0, 0xbdacc0, 0xbf9f30 and 0x6ca610, each ending in the same nvme_tcp.c recv-state error for its tqpair, omitted ...]
00:28:16.358 [2024-07-26 22:58:08.588327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cf400 is same with the state(5) to be set
[... identical recv-state errors for tqpair=0x12cf400, originally interleaved mid-line with the host-side blocks above, repeated ~60 more times through 22:58:08.589347; duplicates omitted ...]
00:28:16.359 [2024-07-26 22:58:08.589922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.359 [2024-07-26 22:58:08.589949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same WRITE / ABORTED - SQ DELETION pairs for sqid:1 cid:1-61 (nsid:1, lba stepping by 128 from 24704 to 32384, len:128), omitted ...]
00:28:16.361 [2024-07-26 22:58:08.591954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.361 [2024-07-26 22:58:08.591968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.361 [2024-07-26 22:58:08.591984] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.591999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.592036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.361 [2024-07-26 22:58:08.592135] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7b9e0 was disconnected and freed. reset controller. 00:28:16.361 [2024-07-26 22:58:08.592795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.592819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.592840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.592856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.592872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.592887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.592903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.592917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.592933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.592948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.592964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.592978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.592994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.361 [2024-07-26 22:58:08.593478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.361 [2024-07-26 22:58:08.593495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.593979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.593993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.362 [2024-07-26 22:58:08.594673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.362 [2024-07-26 22:58:08.594690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-26 22:58:08.594704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.363 [2024-07-26 22:58:08.594720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-26 22:58:08.594734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.363 [2024-07-26 22:58:08.594750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-26 22:58:08.594764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.363 [2024-07-26 22:58:08.594780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-26 22:58:08.594794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.363 [2024-07-26 22:58:08.594825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.363 [2024-07-26 22:58:08.594893] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7f930 was disconnected and freed. reset controller. 
00:28:16.363 [2024-07-26 22:58:08.595426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.363 [2024-07-26 22:58:08.595451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 11 identical WRITE/ABORTED - SQ DELETION (00/08) pairs elided: cid:53 through cid:63, lba:23168 through lba:24448 in steps of 128, 2024-07-26 22:58:08.595474 through 22:58:08.595800 ...]
00:28:16.363 [2024-07-26 22:58:08.595816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.363 [2024-07-26 22:58:08.595830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 51 identical READ/ABORTED - SQ DELETION (00/08) pairs elided: cid:1 through cid:51, lba:16512 through lba:22912 in steps of 128, 2024-07-26 22:58:08.595847 through 22:58:08.597446 ...]
00:28:16.364 [2024-07-26 22:58:08.597532] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7a470 was disconnected and freed. reset controller.
00:28:16.364 [2024-07-26 22:58:08.600609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.364 [2024-07-26 22:58:08.600641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 24 identical WRITE/ABORTED - SQ DELETION (00/08) pairs elided: cid:1 through cid:24, lba:16512 through lba:19456 in steps of 128, 2024-07-26 22:58:08.600665 through 22:58:08.601405 ...]
00:28:16.365 [2024-07-26 22:58:08.601421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.365 [2024-07-26 22:58:08.601435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.601977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.601992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.602008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.602022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.602038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.602052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.602076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.602092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.602108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.365 [2024-07-26 22:58:08.602122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.365 [2024-07-26 22:58:08.602138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.602600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.366 [2024-07-26 22:58:08.602614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.366 [2024-07-26 22:58:08.603156] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd3ae60 was disconnected and freed. reset controller. 
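Every WRITE in the run above completes with the same "(00/08)" status. That notation is (Status Code Type / Status Code): SCT 0x0 is Generic Command Status, and generic status code 0x08 is "Command Aborted due to SQ Deletion", which is what every still-queued I/O gets when the driver deletes the submission queue to reset the controller. A minimal standalone decode of the pair, as a sketch (not SPDK source; only the codes seen in this log are tabled):

#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    /* Only the values that appear in this log; the full tables live
     * in the NVMe base specification. */
    if (sct == 0x0 && sc == 0x00)
        return "SUCCESS";
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION";
    return "(unknown)";
}

int main(void)
{
    unsigned sct = 0, sc = 0;
    /* "(00/08)" is the (SCT/SC) pair printed in the completions above. */
    if (sscanf("(00/08)", "(%2x/%2x)", &sct, &sc) == 2)
        printf("(%02x/%02x) => %s\n", sct, sc, decode_status(sct, sc));
    return 0;
}

The remaining completion fields follow the NVMe completion-queue-entry layout: sqhd is the submission-queue head pointer, p the phase tag, and m/dnr the More and Do Not Retry bits, all zero here.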
00:28:16.366 [2024-07-26 22:58:08.603210] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:16.366 [2024-07-26 22:58:08.603258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdacc0 (9): Bad file descriptor
00:28:16.366 [2024-07-26 22:58:08.603323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:16.366 [2024-07-26 22:58:08.603344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1 through cid:3 ...]
00:28:16.366 [2024-07-26 22:58:08.603446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76cf0 is same with the state(5) to be set
00:28:16.366 [2024-07-26 22:58:08.603481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8cec0 (9): Bad file descriptor
00:28:16.366 [2024-07-26 22:58:08.603507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfbde0 (9): Bad file descriptor
00:28:16.366 [2024-07-26 22:58:08.603537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd55110 (9): Bad file descriptor
00:28:16.366 [2024-07-26 22:58:08.603562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4400 (9): Bad file descriptor
00:28:16.366 [2024-07-26 22:58:08.603590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdba60 (9): Bad file descriptor
00:28:16.366 [2024-07-26 22:58:08.603615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79bdf0 (9): Bad file descriptor
00:28:16.366 [2024-07-26 22:58:08.603643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf9f30 (9): Bad file descriptor
00:28:16.366 [2024-07-26 22:58:08.603673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ca610 (9): Bad file descriptor
00:28:16.366 [2024-07-26 22:58:08.605332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.366 [2024-07-26 22:58:08.605368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeats for cid:1 through cid:5 (lba:16512-17024, len:128 each) ...]
00:28:16.366 [2024-07-26 22:58:08.605630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7cf30 is same with the state(5) to be set
00:28:16.366 [2024-07-26 22:58:08.605703] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7cf30 was disconnected and freed. reset controller.
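The "(9)" on the flush failures above, like the "errno = 111" on the reconnect attempts below, reads as a plain POSIX errno printed next to its message (an interpretation of the log format, not something the log itself states). A two-line check of both values:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Values taken from the surrounding log: "(9): Bad file descriptor"
     * on the qpair flushes, "errno = 111" on the connect() attempts. */
    int codes[] = { 9, 111 };
    for (unsigned i = 0; i < sizeof(codes) / sizeof(codes[0]); i++)
        printf("errno %d = %s\n", codes[i], strerror(codes[i]));
    /* On Linux: 9 = EBADF, 111 = ECONNREFUSED. */
    return 0;
}

EBADF here is consistent with the qpair sockets having already been closed by the reset path before the final flush runs.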
00:28:16.366 [2024-07-26 22:58:08.606953] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:28:16.366 [2024-07-26 22:58:08.608721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:16.366 [2024-07-26 22:58:08.608755] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:16.366 [2024-07-26 22:58:08.608779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:16.366 [2024-07-26 22:58:08.608996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.367 [2024-07-26 22:58:08.609027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdacc0 with addr=10.0.0.2, port=4420
00:28:16.367 [2024-07-26 22:58:08.609046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdacc0 is same with the state(5) to be set
[... the same connect() failed (errno = 111) / sock connection error / recv state trio repeats for tqpair=0xbdba60 with addr=10.0.0.2, port=4420 ...]
00:28:16.367 [2024-07-26 22:58:08.609412] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... the same Unexpected PDU type 0x00 error repeats 4 more times ...]
[... the connect() failed (errno = 111) / sock connection error / recv state trio repeats for tqpair=0xd55110, 0xc8cec0 and 0xbfbde0, all with addr=10.0.0.2, port=4420 ...]
00:28:16.367 [2024-07-26 22:58:08.610896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdacc0 (9): Bad file descriptor
00:28:16.367 [2024-07-26 22:58:08.610917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdba60 (9): Bad file descriptor
00:28:16.367 [2024-07-26 22:58:08.611336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd55110 (9): Bad file descriptor
00:28:16.367 [2024-07-26 22:58:08.611366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8cec0 (9): Bad file descriptor
00:28:16.367 [2024-07-26 22:58:08.611385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfbde0 (9): Bad file descriptor
00:28:16.367 [2024-07-26 22:58:08.611403] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:28:16.367 [2024-07-26 22:58:08.611416] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:28:16.367 [2024-07-26 22:58:08.611433] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
[... the same Ctrlr is in error state / controller reinitialization failed / in failed state trio repeats for nqn.2016-06.io.spdk:cnode3 ...]
00:28:16.367 [2024-07-26 22:58:08.611558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:16.367 [2024-07-26 22:58:08.611579] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same error-state / reinitialization-failed / failed-state trio repeats for nqn.2016-06.io.spdk:cnode8, cnode10 and cnode6 ...]
00:28:16.367 [2024-07-26 22:58:08.611767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:16.367 [2024-07-26 22:58:08.611785] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:16.367 [2024-07-26 22:58:08.611798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
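Each reconnect attempt above dials the target at 10.0.0.2:4420 (the standard NVMe-oF port) and connect() comes back with errno 111; on Linux that is ECONNREFUSED, meaning nothing is accepting on the target side while the subsystem is down, so reconnect_poll_async reports reinitialization failed and the bdev layer marks the reset as failed. The first step can be reproduced with plain POSIX sockets; this is a sketch of that connect-and-report pattern, not the SPDK code path (address and port copied from the log; adjust for your own setup):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Target endpoint as logged above: addr=10.0.0.2, port=4420. */
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* A target that is down or resetting refuses the TCP handshake:
         * ECONNREFUSED, which Linux numbers 111, matching
         * "connect() failed, errno = 111" in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}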
00:28:16.367 [2024-07-26 22:58:08.613241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76cf0 (9): Bad file descriptor
00:28:16.367 [2024-07-26 22:58:08.613416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.367 [2024-07-26 22:58:08.613443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1 through cid:63 (lba:16512-24448, len:128 each) ...]
00:28:16.369 [2024-07-26 22:58:08.615450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79eff0 is same with the state(5) to be set
00:28:16.369 [2024-07-26 22:58:08.616723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.369 [2024-07-26 22:58:08.616747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:6 through cid:30 (lba:17152-20224, len:128 each) ...]
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-07-26 22:58:08.617541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-07-26 22:58:08.617561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-07-26 22:58:08.617576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-07-26 22:58:08.617592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-07-26 22:58:08.617606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-07-26 22:58:08.617623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-07-26 22:58:08.617638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-07-26 22:58:08.617655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-07-26 22:58:08.617669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-07-26 22:58:08.617685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-07-26 22:58:08.617699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-07-26 22:58:08.617716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-07-26 22:58:08.617730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-07-26 22:58:08.617746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-07-26 22:58:08.617761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-07-26 22:58:08.617777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-07-26 22:58:08.617791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.617808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.617822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.617839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.617853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.617869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.617884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.617900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.617914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.617930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.617949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.617966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.617981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.617998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:16.370 [2024-07-26 22:58:08.618172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 
22:58:08.618483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.618732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.618747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0320 is same with the state(5) to be set 00:28:16.370 [2024-07-26 22:58:08.619979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.620002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.620023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.620039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.620056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-07-26 22:58:08.620092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-07-26 22:58:08.620109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.620978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.620992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-07-26 22:58:08.621331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-07-26 22:58:08.621346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:16.372 [2024-07-26 22:58:08.621625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 
22:58:08.621934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.621981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.621995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.622010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbafef0 is same with the state(5) to be set 00:28:16.372 [2024-07-26 22:58:08.623459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.623977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.372 [2024-07-26 22:58:08.623991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.372 [2024-07-26 22:58:08.624007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.624976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.624993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.625007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.625024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.625038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.625054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.625077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.625094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.625109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.625125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.625139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.625156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.625170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.625186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.625201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.625217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.373 [2024-07-26 22:58:08.625231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.373 [2024-07-26 22:58:08.625247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.625261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.625278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.625292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.625308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.625323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.625340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.625354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.625374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.625390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.625406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.625421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.625437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.625451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.625467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.625482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.625497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7e430 is same with the state(5) to be set 00:28:16.374 [2024-07-26 22:58:08.626745] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:16.374 [2024-07-26 22:58:08.626779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:16.374 [2024-07-26 22:58:08.626798] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:16.374 [2024-07-26 22:58:08.626818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:16.374 [2024-07-26 22:58:08.627330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.374 [2024-07-26 22:58:08.627364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bdf0 with addr=10.0.0.2, port=4420 00:28:16.374 [2024-07-26 22:58:08.627387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x79bdf0 is same with the state(5) to be set 00:28:16.374 [2024-07-26 22:58:08.627561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.374 [2024-07-26 22:58:08.627588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd4400 with addr=10.0.0.2, port=4420 00:28:16.374 [2024-07-26 22:58:08.627605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd4400 is same with the state(5) to be set 00:28:16.374 [2024-07-26 22:58:08.627747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.374 [2024-07-26 22:58:08.627773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf9f30 with addr=10.0.0.2, port=4420 00:28:16.374 [2024-07-26 22:58:08.627790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf9f30 is same with the state(5) to be set 00:28:16.374 [2024-07-26 22:58:08.627984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.374 [2024-07-26 22:58:08.628010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ca610 with addr=10.0.0.2, port=4420 00:28:16.374 [2024-07-26 22:58:08.628027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ca610 is same with the state(5) to be set 00:28:16.374 [2024-07-26 22:58:08.629120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629337] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.374 [2024-07-26 22:58:08.629879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.374 [2024-07-26 22:58:08.629895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.629909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.629926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.629940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.629957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.629976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.629993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.375 [2024-07-26 22:58:08.630935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.375 [2024-07-26 22:58:08.630949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.376 [2024-07-26 22:58:08.630966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.376 [2024-07-26 22:58:08.630980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.376 [2024-07-26 22:58:08.630996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.376 [2024-07-26 22:58:08.631011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.376 [2024-07-26 22:58:08.631027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.376 [2024-07-26 22:58:08.631041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.376 [2024-07-26 22:58:08.631070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.376 [2024-07-26 22:58:08.631087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.376 [2024-07-26 22:58:08.631103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.376 [2024-07-26 22:58:08.631118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.376 [2024-07-26 22:58:08.631134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.376 [2024-07-26 22:58:08.631148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.376 [2024-07-26 22:58:08.631171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd39980 is same with the state(5) to be set 00:28:16.376 [2024-07-26 22:58:08.633254] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:16.376 [2024-07-26 22:58:08.633287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:16.376 [2024-07-26 22:58:08.633314] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:16.376 [2024-07-26 22:58:08.633331] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:16.376 [2024-07-26 22:58:08.633348] 
nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:16.376 task offset: 24576 on job bdev=Nvme4n1 fails
00:28:16.376
00:28:16.376 Latency(us)
00:28:16.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.376 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme1n1 ended in about 0.88 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme1n1 : 0.88 144.98 9.06 72.49 0.00 291009.68 19029.71 256318.58
00:28:16.376 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme2n1 ended in about 0.89 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme2n1 : 0.89 150.08 9.38 72.22 0.00 278664.89 20680.25 278066.82
00:28:16.376 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme3n1 ended in about 0.87 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme3n1 : 0.87 146.90 9.18 73.45 0.00 274871.94 14660.65 276513.37
00:28:16.376 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme4n1 ended in about 0.86 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme4n1 : 0.86 221.97 13.87 73.99 0.00 199883.85 21262.79 259425.47
00:28:16.376 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme5n1 ended in about 0.89 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme5n1 : 0.89 143.88 8.99 71.94 0.00 268717.32 21942.42 271853.04
00:28:16.376 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme6n1 ended in about 0.87 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme6n1 : 0.87 146.34 9.15 6.86 0.00 365672.77 44079.03 304475.40
00:28:16.376 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme7n1 ended in about 0.89 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme7n1 : 0.89 147.83 9.24 71.68 0.00 252467.16 21068.61 245444.46
00:28:16.376 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme8n1 ended in about 0.87 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme8n1 : 0.87 221.49 13.84 73.83 0.00 182294.38 29515.47 242337.56
00:28:16.376 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme9n1 ended in about 0.90 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme9n1 : 0.90 146.90 9.18 71.22 0.00 242765.31 18835.53 253211.69
00:28:16.376 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:16.376 Job: Nvme10n1 ended in about 0.87 seconds with error
00:28:16.376 Verification LBA range: start 0x0 length 0x400
00:28:16.376 Nvme10n1 : 0.87 146.57 9.16 73.29 0.00 233528.89 19029.71 310689.19
00:28:16.376 ===================================================================================================================
00:28:16.376 Total : 1616.96 101.06 660.97 0.00 251563.09 14660.65 310689.19
00:28:16.376 [2024-07-26 22:58:08.660424] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:16.376 [2024-07-26
22:58:08.660514] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:16.376 [2024-07-26 22:58:08.660627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79bdf0 (9): Bad file descriptor 00:28:16.376 [2024-07-26 22:58:08.660661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4400 (9): Bad file descriptor 00:28:16.376 [2024-07-26 22:58:08.660681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf9f30 (9): Bad file descriptor 00:28:16.376 [2024-07-26 22:58:08.660699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ca610 (9): Bad file descriptor 00:28:16.376 [2024-07-26 22:58:08.661169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.376 [2024-07-26 22:58:08.661217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdba60 with addr=10.0.0.2, port=4420 00:28:16.376 [2024-07-26 22:58:08.661238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdba60 is same with the state(5) to be set 00:28:16.376 [2024-07-26 22:58:08.661394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.376 [2024-07-26 22:58:08.661421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdacc0 with addr=10.0.0.2, port=4420 00:28:16.376 [2024-07-26 22:58:08.661438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdacc0 is same with the state(5) to be set 00:28:16.376 [2024-07-26 22:58:08.661587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.376 [2024-07-26 22:58:08.661614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfbde0 with addr=10.0.0.2, port=4420 00:28:16.376 [2024-07-26 22:58:08.661630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbde0 is same with the state(5) to be set 00:28:16.376 [2024-07-26 22:58:08.661770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.376 [2024-07-26 22:58:08.661797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8cec0 with addr=10.0.0.2, port=4420 00:28:16.376 [2024-07-26 22:58:08.661814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8cec0 is same with the state(5) to be set 00:28:16.376 [2024-07-26 22:58:08.661976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.376 [2024-07-26 22:58:08.662002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd55110 with addr=10.0.0.2, port=4420 00:28:16.376 [2024-07-26 22:58:08.662019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd55110 is same with the state(5) to be set 00:28:16.376 [2024-07-26 22:58:08.662156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.376 [2024-07-26 22:58:08.662185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76cf0 with addr=10.0.0.2, port=4420 00:28:16.376 [2024-07-26 22:58:08.662201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76cf0 is same with the state(5) to be set 00:28:16.376 [2024-07-26 22:58:08.662218] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:16.376 [2024-07-26 22:58:08.662232] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:16.376 [2024-07-26 22:58:08.662249] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:16.376 [2024-07-26 22:58:08.662270] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:16.376 [2024-07-26 22:58:08.662285] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:16.376 [2024-07-26 22:58:08.662310] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:16.376 [2024-07-26 22:58:08.662327] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:16.376 [2024-07-26 22:58:08.662341] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:16.376 [2024-07-26 22:58:08.662355] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:16.376 [2024-07-26 22:58:08.662372] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:16.376 [2024-07-26 22:58:08.662386] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:16.376 [2024-07-26 22:58:08.662399] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:16.376 [2024-07-26 22:58:08.662430] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:16.376 [2024-07-26 22:58:08.662453] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:16.376 [2024-07-26 22:58:08.662472] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:16.376 [2024-07-26 22:58:08.662490] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:16.376 [2024-07-26 22:58:08.662854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:16.377 [2024-07-26 22:58:08.662879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:16.377 [2024-07-26 22:58:08.662893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:16.377 [2024-07-26 22:58:08.662905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
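The errno = 111 failures above are ECONNREFUSED: with the target application gone, every reconnect attempt to 10.0.0.2:4420 is refused at the TCP level, which is what drives each controller into the failed state recorded here. A minimal bash probe that reproduces the same condition (not part of the test scripts; the helper name is invented):

  probe_target() {
      # bash's /dev/tcp pseudo-device issues a real connect(); the
      # subshell closes the descriptor again when it exits.
      local addr=$1 port=$2
      if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
          echo "${addr}:${port} still accepting connections"
      else
          echo "${addr}:${port} refused (errno 111, target is down)"
      fi
  }
  probe_target 10.0.0.2 4420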
00:28:16.377 [2024-07-26 22:58:08.662922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdba60 (9): Bad file descriptor 00:28:16.377 [2024-07-26 22:58:08.662942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdacc0 (9): Bad file descriptor 00:28:16.377 [2024-07-26 22:58:08.662960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfbde0 (9): Bad file descriptor 00:28:16.377 [2024-07-26 22:58:08.662978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8cec0 (9): Bad file descriptor 00:28:16.377 [2024-07-26 22:58:08.662996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd55110 (9): Bad file descriptor 00:28:16.377 [2024-07-26 22:58:08.663013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76cf0 (9): Bad file descriptor 00:28:16.377 [2024-07-26 22:58:08.663341] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:16.377 [2024-07-26 22:58:08.663367] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:16.377 [2024-07-26 22:58:08.663382] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:16.377 [2024-07-26 22:58:08.663400] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:16.377 [2024-07-26 22:58:08.663415] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:16.377 [2024-07-26 22:58:08.663428] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:16.377 [2024-07-26 22:58:08.663445] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:16.377 [2024-07-26 22:58:08.663459] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:16.377 [2024-07-26 22:58:08.663472] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:16.377 [2024-07-26 22:58:08.663493] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:16.377 [2024-07-26 22:58:08.663508] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:16.377 [2024-07-26 22:58:08.663522] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:16.377 [2024-07-26 22:58:08.663538] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:16.377 [2024-07-26 22:58:08.663552] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:16.377 [2024-07-26 22:58:08.663566] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
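Once a controller reaches "in failed state" the bdev layer stops retrying it, as the cascade of "Resetting controller failed" records shows. A hedged diagnostic sketch, not something this test runs: the surviving controller objects can be listed over the bdevperf RPC socket with the standard bdev_nvme_get_controllers RPC; the socket path matches the bdevperf_rpc_sock configured later in this log, but treat the exact invocation as an assumption.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Prints each attached NVMe controller and its current state as JSON.
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers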
00:28:16.377 [2024-07-26 22:58:08.663583] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:28:16.377 [2024-07-26 22:58:08.663597] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:28:16.377 [2024-07-26 22:58:08.663610] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:28:16.377 [2024-07-26 22:58:08.663658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:16.377 [2024-07-26 22:58:08.663677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:16.377 [2024-07-26 22:58:08.663690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:16.377 [2024-07-26 22:58:08.663701] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:16.377 [2024-07-26 22:58:08.663713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:16.377 [2024-07-26 22:58:08.663725] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:16.634 22:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:28:16.634 22:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3628109
00:28:18.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3628109) - No such process
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:18.010 rmmod nvme_tcp
00:28:18.010 rmmod nvme_fabrics
00:28:18.010 rmmod nvme_keyring
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
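nvmfcleanup unloads the kernel initiator modules in dependency order (nvme-tcp first, then nvme-fabrics; the rmmod output above shows nvme_keyring going with them). A small hypothetical check, not in common.sh, to assert the modules are really gone before the next test begins:

  for mod in nvme_tcp nvme_fabrics nvme_keyring; do
      # lsmod prints loaded module names in the first column
      if lsmod | grep -q "^${mod} "; then
          echo "WARNING: ${mod} still loaded" >&2
      fi
  done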
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:18.010 22:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:19.913 22:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:19.913
00:28:19.913 real 0m7.671s
00:28:19.913 user 0m19.012s
00:28:19.913 sys 0m1.535s
00:28:19.913 22:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:28:19.913 22:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:19.913 ************************************
00:28:19.913 END TEST nvmf_shutdown_tc3
00:28:19.913 ************************************
00:28:19.913 22:58:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:28:19.913
00:28:19.913 real 0m26.955s
00:28:19.914 user 1m14.888s
00:28:19.914 sys 0m6.287s
00:28:19.914 22:58:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable
00:28:19.914 22:58:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:19.914 ************************************
00:28:19.914 END TEST nvmf_shutdown
00:28:19.914 ************************************
00:28:19.914 22:58:12 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:28:19.914 22:58:12 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:19.914 22:58:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:19.914 22:58:12 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:28:19.914 22:58:12 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:28:19.914 22:58:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:19.914 22:58:12 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:28:19.914 22:58:12 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:19.914 22:58:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:28:19.914 22:58:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:28:19.914 22:58:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:19.914 ************************************
00:28:19.914 START TEST nvmf_multicontroller
00:28:19.914 ************************************
00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:19.914 * Looking for test storage...
00:28:19.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:19.914 22:58:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.914 22:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.906 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.906 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.906 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.906 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.906 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.906 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.906 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.906 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.906 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.907 22:58:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:21.907 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:21.907 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:21.907 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:21.907 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.907 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.165 22:58:14 
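The nvmf_tcp_init block running here (and finishing just below) reduces to moving one of the two ice ports into a private network namespace and addressing both ends of the cable, so that target and initiator share one host but talk over a real link. Condensed into plain commands, with the interface names and addresses taken verbatim from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                       # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1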
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:28:22.165 00:28:22.165 --- 10.0.0.2 ping statistics --- 00:28:22.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.165 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:28:22.165 00:28:22.165 --- 10.0.0.1 ping statistics --- 00:28:22.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.165 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3630631 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3630631 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3630631 ']' 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:28:22.165 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.166 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:22.166 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.166 [2024-07-26 22:58:14.567504] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:22.166 [2024-07-26 22:58:14.567589] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.166 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.166 [2024-07-26 22:58:14.641228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:22.424 [2024-07-26 22:58:14.731426] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.424 [2024-07-26 22:58:14.731490] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.424 [2024-07-26 22:58:14.731507] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.424 [2024-07-26 22:58:14.731521] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.424 [2024-07-26 22:58:14.731534] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.424 [2024-07-26 22:58:14.731622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.424 [2024-07-26 22:58:14.731735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.424 [2024-07-26 22:58:14.731739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.424 [2024-07-26 22:58:14.862759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.424 22:58:14 
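nvmfappstart boils down to launching nvmf_tgt inside the namespace and waiting for its RPC socket; the -m 0xE mask matches the three reactor cores (1-3) reported above. Roughly, with the rpc.py invocation as an assumption (the harness goes through its rpc_cmd wrapper rather than calling the script directly):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # ... block until /var/tmp/spdk.sock answers, then create the TCP transport:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192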
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.424 Malloc0 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.424 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.424 [2024-07-26 22:58:14.925785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 [2024-07-26 22:58:14.933686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 Malloc1 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
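The provisioning sequence traced above, in plain rpc.py form; the script path is an assumption, but the arguments are verbatim from the trace. cnode2 then repeats the same steps with Malloc1 and serial SPDK00000000000002:

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421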
00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3630657 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3630657 /var/tmp/bdevperf.sock 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3630657 ']' 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:22.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
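bdevperf is started idle here: -z makes it wait for an RPC trigger instead of running immediately, and -r points it at a private socket so it does not collide with the target's /var/tmp/spdk.sock. The workload itself (queue depth 128, 4096-byte writes, 1 second) is only armed at this point, not started:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  bdevperf_pid=$!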
00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:22.682 22:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.941 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:22.941 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:22.941 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:22.941 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.941 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.202 NVMe0n1 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.202 1 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.202 request: 00:28:23.202 { 00:28:23.202 "name": "NVMe0", 00:28:23.202 "trtype": "tcp", 00:28:23.202 "traddr": "10.0.0.2", 00:28:23.202 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:23.202 "hostaddr": "10.0.0.2", 00:28:23.202 "hostsvcid": "60000", 00:28:23.202 "adrfam": "ipv4", 00:28:23.202 "trsvcid": "4420", 00:28:23.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.202 "method": 
"bdev_nvme_attach_controller", 00:28:23.202 "req_id": 1 00:28:23.202 } 00:28:23.202 Got JSON-RPC error response 00:28:23.202 response: 00:28:23.202 { 00:28:23.202 "code": -114, 00:28:23.202 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:23.202 } 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.202 request: 00:28:23.202 { 00:28:23.202 "name": "NVMe0", 00:28:23.202 "trtype": "tcp", 00:28:23.202 "traddr": "10.0.0.2", 00:28:23.202 "hostaddr": "10.0.0.2", 00:28:23.202 "hostsvcid": "60000", 00:28:23.202 "adrfam": "ipv4", 00:28:23.202 "trsvcid": "4420", 00:28:23.202 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:23.202 "method": "bdev_nvme_attach_controller", 00:28:23.202 "req_id": 1 00:28:23.202 } 00:28:23.202 Got JSON-RPC error response 00:28:23.202 response: 00:28:23.202 { 00:28:23.202 "code": -114, 00:28:23.202 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:23.202 } 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.202 request: 00:28:23.202 { 00:28:23.202 "name": "NVMe0", 00:28:23.202 "trtype": "tcp", 00:28:23.202 "traddr": "10.0.0.2", 00:28:23.202 "hostaddr": "10.0.0.2", 00:28:23.202 "hostsvcid": "60000", 00:28:23.202 "adrfam": "ipv4", 00:28:23.202 "trsvcid": "4420", 00:28:23.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.202 "multipath": "disable", 00:28:23.202 "method": "bdev_nvme_attach_controller", 00:28:23.202 "req_id": 1 00:28:23.202 } 00:28:23.202 Got JSON-RPC error response 00:28:23.202 response: 00:28:23.202 { 00:28:23.202 "code": -114, 00:28:23.202 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:23.202 } 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.202 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.202 request: 00:28:23.202 { 00:28:23.202 "name": "NVMe0", 00:28:23.202 "trtype": "tcp", 00:28:23.202 "traddr": "10.0.0.2", 00:28:23.202 "hostaddr": "10.0.0.2", 00:28:23.202 "hostsvcid": "60000", 00:28:23.202 "adrfam": "ipv4", 00:28:23.202 "trsvcid": "4420", 00:28:23.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.202 "multipath": "failover", 00:28:23.202 "method": "bdev_nvme_attach_controller", 00:28:23.202 "req_id": 1 00:28:23.202 } 00:28:23.202 Got JSON-RPC error response 00:28:23.202 response: 00:28:23.202 { 00:28:23.202 "code": -114, 00:28:23.202 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:23.203 } 00:28:23.203 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:23.203 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:23.203 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:23.203 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:23.203 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:23.203 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:23.203 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.203 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.461 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.461 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:23.461 22:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:24.834 0 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3630657 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3630657 ']' 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3630657 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:24.834 22:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3630657 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3630657' 00:28:24.834 killing process with pid 3630657 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3630657 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3630657 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.834 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:24.835 22:58:17 
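The four NOT blocks above are the point of the test: once NVMe0 exists, re-attaching under that name with a different hostnqn, against a different subsystem, or with multipath disabled is rejected with -114, and a failover attach to the very same 4420 path is rejected too. What does succeed is adding the second listener port as an extra path for NVMe0; that path is then detached and re-attached as a separate controller NVMe1, after which bdev_nvme_get_controllers | grep -c counts two controllers and the armed bdevperf job is fired. The successful commands, verbatim from the trace:

  # first path: creates controller NVMe0 (bdev NVMe0n1), pinning the host identity
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # second path on port 4421: accepted for the same controller name
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # fire the armed workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests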
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:24.835 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:24.835 [2024-07-26 22:58:15.030758] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:24.835 [2024-07-26 22:58:15.030860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630657 ] 00:28:24.835 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.835 [2024-07-26 22:58:15.092177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.835 [2024-07-26 22:58:15.181235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.835 [2024-07-26 22:58:15.812279] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 79a5b6e7-c5ed-4f3a-a9ab-b12593638961 already exists 00:28:24.835 [2024-07-26 22:58:15.812322] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:79a5b6e7-c5ed-4f3a-a9ab-b12593638961 alias for bdev NVMe1n1 00:28:24.835 [2024-07-26 22:58:15.812341] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:24.835 Running I/O for 1 seconds... 
00:28:24.835 00:28:24.835 Latency(us) 00:28:24.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.835 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:24.835 NVMe0n1 : 1.01 18190.36 71.06 0.00 0.00 7018.05 4344.79 17864.63 00:28:24.835 =================================================================================================================== 00:28:24.835 Total : 18190.36 71.06 0.00 0.00 7018.05 4344.79 17864.63 00:28:24.835 Received shutdown signal, test time was about 1.000000 seconds 00:28:24.835 00:28:24.835 Latency(us) 00:28:24.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.835 =================================================================================================================== 00:28:24.835 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.835 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:24.835 rmmod nvme_tcp 00:28:24.835 rmmod nvme_fabrics 00:28:24.835 rmmod nvme_keyring 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3630631 ']' 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3630631 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3630631 ']' 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3630631 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3630631 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3630631' 00:28:24.835 killing process with pid 3630631 00:28:24.835 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3630631 00:28:24.835 22:58:17 
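killprocess appears twice in this teardown, once for bdevperf and once for the target. From the traced checks (the -z guard, kill -0, uname, the ps comm= lookup and the sudo comparison) its shape is roughly the following; this is an assumed reconstruction, not the verbatim helper, and the sudo branch in particular is inferred intent:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                   # the '[' -z "$pid" ']' guard in the trace
      kill -0 "$pid" || return 1                  # still alive?
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1  # refuse to signal a sudo wrapper (assumed)
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }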
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3630631 00:28:25.405 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:25.405 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:25.405 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:25.405 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.405 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:25.405 22:58:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.405 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.405 22:58:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.309 22:58:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:27.309 00:28:27.309 real 0m7.367s 00:28:27.309 user 0m11.364s 00:28:27.309 sys 0m2.353s 00:28:27.309 22:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:27.309 22:58:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.309 ************************************ 00:28:27.309 END TEST nvmf_multicontroller 00:28:27.309 ************************************ 00:28:27.309 22:58:19 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:27.309 22:58:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:27.309 22:58:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:27.309 22:58:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:27.309 ************************************ 00:28:27.309 START TEST nvmf_aer 00:28:27.309 ************************************ 00:28:27.309 22:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:27.309 * Looking for test storage... 
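aer.sh opens with the same nvmf/common.sh preamble, plus a generated host identity: nvme gen-hostnqn produces an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the host ID traced below is its UUID suffix. Only the resulting values appear in the trace; the derivation here is an assumption:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # 5b23e107-7094-e311-b1cb-001e67a97d55 in this run
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")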
00:28:27.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:27.309 22:58:19 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.309 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:27.309 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:27.310 22:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:29.846 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:29.846 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:29.846 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.846 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:29.846 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.847 
22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:29.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:28:29.847 00:28:29.847 --- 10.0.0.2 ping statistics --- 00:28:29.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.847 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:29.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:28:29.847 00:28:29.847 --- 10.0.0.1 ping statistics --- 00:28:29.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.847 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3632880 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3632880 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3632880 ']' 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:29.847 22:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.847 [2024-07-26 22:58:21.966204] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:29.847 [2024-07-26 22:58:21.966277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.847 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.847 [2024-07-26 22:58:22.033312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.847 [2024-07-26 22:58:22.122373] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.847 [2024-07-26 22:58:22.122437] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
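The nvmf_tcp_init block above is what lets a single machine play both ends of the fabric: one of the two E810 ports (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal standalone sketch of that topology, assuming the same physically looped port pair and addressing as this rig, would look roughly like:

# Sketch of nvmf_tcp_init's loopback topology. Assumes two physically
# looped ports named cvl_0_0/cvl_0_1 as on this rig; substitute your own.
TARGET_IF=cvl_0_0        # moves into the namespace, will serve 10.0.0.2:4420
INITIATOR_IF=cvl_0_1     # stays in the root namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"             # flush before the link moves
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> root-ns initiator

The two pings are the harness's go/no-go check that the loop is actually cabled and up before nvmf_tgt is launched inside the namespace, as the log does next.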
00:28:29.847 [2024-07-26 22:58:22.122450] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.847 [2024-07-26 22:58:22.122462] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.847 [2024-07-26 22:58:22.122472] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.847 [2024-07-26 22:58:22.122523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.847 [2024-07-26 22:58:22.122584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.847 [2024-07-26 22:58:22.122656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.847 [2024-07-26 22:58:22.122658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.847 [2024-07-26 22:58:22.277931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.847 Malloc0 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.847 [2024-07-26 22:58:22.331680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.847 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.847 [ 00:28:29.847 { 00:28:29.847 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:29.847 "subtype": "Discovery", 00:28:29.847 "listen_addresses": [], 00:28:29.847 "allow_any_host": true, 00:28:29.847 "hosts": [] 00:28:29.847 }, 00:28:29.847 { 00:28:29.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.847 "subtype": "NVMe", 00:28:29.847 "listen_addresses": [ 00:28:29.847 { 00:28:29.847 "trtype": "TCP", 00:28:29.847 "adrfam": "IPv4", 00:28:29.847 "traddr": "10.0.0.2", 00:28:29.847 "trsvcid": "4420" 00:28:29.847 } 00:28:29.847 ], 00:28:29.847 "allow_any_host": true, 00:28:29.847 "hosts": [], 00:28:29.847 "serial_number": "SPDK00000000000001", 00:28:29.847 "model_number": "SPDK bdev Controller", 00:28:29.847 "max_namespaces": 2, 00:28:29.847 "min_cntlid": 1, 00:28:29.847 "max_cntlid": 65519, 00:28:29.847 "namespaces": [ 00:28:29.847 { 00:28:29.847 "nsid": 1, 00:28:29.847 "bdev_name": "Malloc0", 00:28:29.847 "name": "Malloc0", 00:28:29.847 "nguid": "5638D5C2228248E79FA4F46A9C4136AD", 00:28:29.847 "uuid": "5638d5c2-2282-48e7-9fa4-f46a9c4136ad" 00:28:29.847 } 00:28:29.847 ] 00:28:29.847 } 00:28:29.847 ] 00:28:29.848 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.848 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:29.848 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:30.105 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3633009 00:28:30.105 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:30.105 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:30.106 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:30.106 Malloc1 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.106 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:30.364 Asynchronous Event Request test 00:28:30.364 Attaching to 10.0.0.2 00:28:30.364 Attached to 10.0.0.2 00:28:30.364 Registering asynchronous event callbacks... 00:28:30.364 Starting namespace attribute notice tests for all controllers... 00:28:30.364 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:30.364 aer_cb - Changed Namespace 00:28:30.364 Cleaning up... 00:28:30.364 [ 00:28:30.364 { 00:28:30.364 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:30.364 "subtype": "Discovery", 00:28:30.364 "listen_addresses": [], 00:28:30.364 "allow_any_host": true, 00:28:30.364 "hosts": [] 00:28:30.364 }, 00:28:30.364 { 00:28:30.364 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.364 "subtype": "NVMe", 00:28:30.364 "listen_addresses": [ 00:28:30.364 { 00:28:30.364 "trtype": "TCP", 00:28:30.364 "adrfam": "IPv4", 00:28:30.364 "traddr": "10.0.0.2", 00:28:30.364 "trsvcid": "4420" 00:28:30.364 } 00:28:30.364 ], 00:28:30.364 "allow_any_host": true, 00:28:30.364 "hosts": [], 00:28:30.364 "serial_number": "SPDK00000000000001", 00:28:30.364 "model_number": "SPDK bdev Controller", 00:28:30.364 "max_namespaces": 2, 00:28:30.364 "min_cntlid": 1, 00:28:30.364 "max_cntlid": 65519, 00:28:30.364 "namespaces": [ 00:28:30.364 { 00:28:30.364 "nsid": 1, 00:28:30.364 "bdev_name": "Malloc0", 00:28:30.364 "name": "Malloc0", 00:28:30.364 "nguid": "5638D5C2228248E79FA4F46A9C4136AD", 00:28:30.364 "uuid": "5638d5c2-2282-48e7-9fa4-f46a9c4136ad" 00:28:30.364 }, 00:28:30.364 { 00:28:30.364 "nsid": 2, 00:28:30.364 "bdev_name": "Malloc1", 00:28:30.364 "name": "Malloc1", 00:28:30.364 "nguid": "B1DFB6113B7240E8A4316FE0745C9BBB", 00:28:30.364 "uuid": "b1dfb611-3b72-40e8-a431-6fe0745c9bbb" 00:28:30.364 } 00:28:30.364 ] 00:28:30.364 } 00:28:30.364 ] 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3633009 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:30.364 rmmod nvme_tcp 00:28:30.364 rmmod nvme_fabrics 00:28:30.364 rmmod nvme_keyring 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3632880 ']' 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3632880 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3632880 ']' 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3632880 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3632880 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3632880' 00:28:30.364 killing process with pid 3632880 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3632880 00:28:30.364 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3632880 00:28:30.622 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:30.622 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:30.622 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:30.622 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:30.622 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:30.622 22:58:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.622 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:28:30.622 22:58:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.524 22:58:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:32.524 00:28:32.524 real 0m5.311s 00:28:32.524 user 0m4.075s 00:28:32.524 sys 0m1.872s 00:28:32.524 22:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:32.524 22:58:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:32.524 ************************************ 00:28:32.524 END TEST nvmf_aer 00:28:32.524 ************************************ 00:28:32.783 22:58:25 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:32.784 22:58:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:32.784 22:58:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:32.784 22:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:32.784 ************************************ 00:28:32.784 START TEST nvmf_async_init 00:28:32.784 ************************************ 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:32.784 * Looking for test storage... 00:28:32.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.784 
22:58:25 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=24f83f2ab1d94c37a6e5bfe44099df2a 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:32.784 22:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.689 22:58:27 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.689 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:34.690 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:34.690 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:34.690 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:34.690 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:34.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:28:34.690 00:28:34.690 --- 10.0.0.2 ping statistics --- 00:28:34.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.690 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:28:34.690 00:28:34.690 --- 10.0.0.1 ping statistics --- 00:28:34.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.690 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3634938 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3634938 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@827 -- # '[' -z 3634938 ']' 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:34.690 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.949 [2024-07-26 22:58:27.209160] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:34.949 [2024-07-26 22:58:27.209244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.949 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.949 [2024-07-26 22:58:27.271884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.949 [2024-07-26 22:58:27.358009] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.949 [2024-07-26 22:58:27.358084] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.949 [2024-07-26 22:58:27.358110] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.949 [2024-07-26 22:58:27.358121] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.949 [2024-07-26 22:58:27.358132] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:34.949 [2024-07-26 22:58:27.358158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.208 [2024-07-26 22:58:27.488953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.208 null0 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 24f83f2ab1d94c37a6e5bfe44099df2a 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.208 [2024-07-26 22:58:27.529238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.208 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.468 nvme0n1 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.468 [ 00:28:35.468 { 00:28:35.468 "name": "nvme0n1", 00:28:35.468 "aliases": [ 00:28:35.468 "24f83f2a-b1d9-4c37-a6e5-bfe44099df2a" 00:28:35.468 ], 00:28:35.468 "product_name": "NVMe disk", 00:28:35.468 "block_size": 512, 00:28:35.468 "num_blocks": 2097152, 00:28:35.468 "uuid": "24f83f2a-b1d9-4c37-a6e5-bfe44099df2a", 00:28:35.468 "assigned_rate_limits": { 00:28:35.468 "rw_ios_per_sec": 0, 00:28:35.468 "rw_mbytes_per_sec": 0, 00:28:35.468 "r_mbytes_per_sec": 0, 00:28:35.468 "w_mbytes_per_sec": 0 00:28:35.468 }, 00:28:35.468 "claimed": false, 00:28:35.468 "zoned": false, 00:28:35.468 "supported_io_types": { 00:28:35.468 "read": true, 00:28:35.468 "write": true, 00:28:35.468 "unmap": false, 00:28:35.468 "write_zeroes": true, 00:28:35.468 "flush": true, 00:28:35.468 "reset": true, 00:28:35.468 "compare": true, 00:28:35.468 "compare_and_write": true, 00:28:35.468 "abort": true, 00:28:35.468 "nvme_admin": true, 00:28:35.468 "nvme_io": true 00:28:35.468 }, 00:28:35.468 "memory_domains": [ 00:28:35.468 { 00:28:35.468 "dma_device_id": "system", 00:28:35.468 "dma_device_type": 1 00:28:35.468 } 00:28:35.468 ], 00:28:35.468 "driver_specific": { 00:28:35.468 "nvme": [ 00:28:35.468 { 00:28:35.468 "trid": { 00:28:35.468 "trtype": "TCP", 00:28:35.468 "adrfam": "IPv4", 00:28:35.468 "traddr": "10.0.0.2", 00:28:35.468 "trsvcid": "4420", 00:28:35.468 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:35.468 }, 00:28:35.468 "ctrlr_data": { 00:28:35.468 "cntlid": 1, 00:28:35.468 "vendor_id": "0x8086", 00:28:35.468 "model_number": "SPDK bdev Controller", 00:28:35.468 "serial_number": "00000000000000000000", 00:28:35.468 "firmware_revision": "24.05.1", 00:28:35.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.468 "oacs": { 00:28:35.468 "security": 0, 00:28:35.468 "format": 0, 00:28:35.468 "firmware": 0, 00:28:35.468 "ns_manage": 0 00:28:35.468 }, 00:28:35.468 "multi_ctrlr": true, 00:28:35.468 "ana_reporting": false 00:28:35.468 }, 00:28:35.468 "vs": { 00:28:35.468 "nvme_version": "1.3" 00:28:35.468 }, 00:28:35.468 "ns_data": { 00:28:35.468 "id": 1, 00:28:35.468 "can_share": true 00:28:35.468 } 00:28:35.468 } 00:28:35.468 ], 00:28:35.468 "mp_policy": "active_passive" 00:28:35.468 } 00:28:35.468 } 00:28:35.468 ] 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.468 [2024-07-26 22:58:27.781862] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:35.468 [2024-07-26 22:58:27.781959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1072b90 (9): Bad file descriptor 00:28:35.468 [2024-07-26 22:58:27.924210] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.468 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.468 [ 00:28:35.468 { 00:28:35.468 "name": "nvme0n1", 00:28:35.468 "aliases": [ 00:28:35.468 "24f83f2a-b1d9-4c37-a6e5-bfe44099df2a" 00:28:35.468 ], 00:28:35.468 "product_name": "NVMe disk", 00:28:35.468 "block_size": 512, 00:28:35.468 "num_blocks": 2097152, 00:28:35.468 "uuid": "24f83f2a-b1d9-4c37-a6e5-bfe44099df2a", 00:28:35.468 "assigned_rate_limits": { 00:28:35.468 "rw_ios_per_sec": 0, 00:28:35.468 "rw_mbytes_per_sec": 0, 00:28:35.468 "r_mbytes_per_sec": 0, 00:28:35.468 "w_mbytes_per_sec": 0 00:28:35.468 }, 00:28:35.468 "claimed": false, 00:28:35.468 "zoned": false, 00:28:35.468 "supported_io_types": { 00:28:35.468 "read": true, 00:28:35.468 "write": true, 00:28:35.468 "unmap": false, 00:28:35.468 "write_zeroes": true, 00:28:35.468 "flush": true, 00:28:35.468 "reset": true, 00:28:35.468 "compare": true, 00:28:35.468 "compare_and_write": true, 00:28:35.468 "abort": true, 00:28:35.468 "nvme_admin": true, 00:28:35.468 "nvme_io": true 00:28:35.468 }, 00:28:35.468 "memory_domains": [ 00:28:35.468 { 00:28:35.468 "dma_device_id": "system", 00:28:35.468 "dma_device_type": 1 00:28:35.468 } 00:28:35.468 ], 00:28:35.468 "driver_specific": { 00:28:35.468 "nvme": [ 00:28:35.468 { 00:28:35.468 "trid": { 00:28:35.468 "trtype": "TCP", 00:28:35.468 "adrfam": "IPv4", 00:28:35.468 "traddr": "10.0.0.2", 00:28:35.468 "trsvcid": "4420", 00:28:35.468 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:35.468 }, 00:28:35.468 "ctrlr_data": { 00:28:35.468 "cntlid": 2, 00:28:35.468 "vendor_id": "0x8086", 00:28:35.468 "model_number": "SPDK bdev Controller", 00:28:35.468 "serial_number": "00000000000000000000", 00:28:35.468 "firmware_revision": "24.05.1", 00:28:35.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.468 "oacs": { 00:28:35.468 "security": 0, 00:28:35.468 "format": 0, 00:28:35.468 "firmware": 0, 00:28:35.468 "ns_manage": 0 00:28:35.468 }, 00:28:35.468 "multi_ctrlr": true, 00:28:35.468 "ana_reporting": false 00:28:35.468 }, 00:28:35.468 "vs": { 00:28:35.468 "nvme_version": "1.3" 00:28:35.468 }, 00:28:35.468 "ns_data": { 00:28:35.468 "id": 1, 00:28:35.468 "can_share": true 00:28:35.468 } 00:28:35.468 } 00:28:35.468 ], 00:28:35.468 "mp_policy": "active_passive" 00:28:35.468 } 00:28:35.468 } 00:28:35.469 ] 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@53 -- # mktemp 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.EI7EnWAhf5 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.EI7EnWAhf5 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.469 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.729 [2024-07-26 22:58:27.974509] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:35.729 [2024-07-26 22:58:27.974639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EI7EnWAhf5 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.729 [2024-07-26 22:58:27.982529] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EI7EnWAhf5 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.729 22:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.729 [2024-07-26 22:58:27.990542] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:35.729 [2024-07-26 22:58:27.990607] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:35.729 nvme0n1 00:28:35.729 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.729 22:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:35.729 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.729 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.729 [ 00:28:35.729 { 00:28:35.729 "name": "nvme0n1", 00:28:35.729 "aliases": [ 00:28:35.729 "24f83f2a-b1d9-4c37-a6e5-bfe44099df2a" 00:28:35.729 ], 
00:28:35.729 "product_name": "NVMe disk", 00:28:35.729 "block_size": 512, 00:28:35.729 "num_blocks": 2097152, 00:28:35.729 "uuid": "24f83f2a-b1d9-4c37-a6e5-bfe44099df2a", 00:28:35.729 "assigned_rate_limits": { 00:28:35.729 "rw_ios_per_sec": 0, 00:28:35.729 "rw_mbytes_per_sec": 0, 00:28:35.729 "r_mbytes_per_sec": 0, 00:28:35.729 "w_mbytes_per_sec": 0 00:28:35.729 }, 00:28:35.729 "claimed": false, 00:28:35.729 "zoned": false, 00:28:35.729 "supported_io_types": { 00:28:35.729 "read": true, 00:28:35.729 "write": true, 00:28:35.729 "unmap": false, 00:28:35.729 "write_zeroes": true, 00:28:35.729 "flush": true, 00:28:35.729 "reset": true, 00:28:35.729 "compare": true, 00:28:35.729 "compare_and_write": true, 00:28:35.729 "abort": true, 00:28:35.729 "nvme_admin": true, 00:28:35.729 "nvme_io": true 00:28:35.729 }, 00:28:35.729 "memory_domains": [ 00:28:35.729 { 00:28:35.729 "dma_device_id": "system", 00:28:35.729 "dma_device_type": 1 00:28:35.729 } 00:28:35.729 ], 00:28:35.729 "driver_specific": { 00:28:35.729 "nvme": [ 00:28:35.729 { 00:28:35.729 "trid": { 00:28:35.729 "trtype": "TCP", 00:28:35.729 "adrfam": "IPv4", 00:28:35.729 "traddr": "10.0.0.2", 00:28:35.729 "trsvcid": "4421", 00:28:35.729 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:35.729 }, 00:28:35.729 "ctrlr_data": { 00:28:35.729 "cntlid": 3, 00:28:35.729 "vendor_id": "0x8086", 00:28:35.729 "model_number": "SPDK bdev Controller", 00:28:35.729 "serial_number": "00000000000000000000", 00:28:35.729 "firmware_revision": "24.05.1", 00:28:35.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.729 "oacs": { 00:28:35.729 "security": 0, 00:28:35.729 "format": 0, 00:28:35.729 "firmware": 0, 00:28:35.729 "ns_manage": 0 00:28:35.729 }, 00:28:35.729 "multi_ctrlr": true, 00:28:35.729 "ana_reporting": false 00:28:35.729 }, 00:28:35.729 "vs": { 00:28:35.729 "nvme_version": "1.3" 00:28:35.729 }, 00:28:35.729 "ns_data": { 00:28:35.730 "id": 1, 00:28:35.730 "can_share": true 00:28:35.730 } 00:28:35.730 } 00:28:35.730 ], 00:28:35.730 "mp_policy": "active_passive" 00:28:35.730 } 00:28:35.730 } 00:28:35.730 ] 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.EI7EnWAhf5 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:35.730 rmmod nvme_tcp 00:28:35.730 rmmod nvme_fabrics 00:28:35.730 rmmod nvme_keyring 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3634938 ']' 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3634938 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3634938 ']' 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3634938 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3634938 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3634938' 00:28:35.730 killing process with pid 3634938 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3634938 00:28:35.730 [2024-07-26 22:58:28.192338] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:35.730 [2024-07-26 22:58:28.192393] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:35.730 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3634938 00:28:35.989 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:35.989 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:35.989 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:35.989 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:35.989 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:35.989 22:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.989 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.989 22:58:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.524 22:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:38.524 00:28:38.524 real 0m5.382s 00:28:38.524 user 0m2.044s 00:28:38.524 sys 0m1.713s 00:28:38.524 22:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:38.524 22:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.524 ************************************ 00:28:38.524 END TEST nvmf_async_init 00:28:38.524 ************************************ 00:28:38.524 22:58:30 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:38.524 22:58:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:38.524 22:58:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:38.524 
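Before the dma test output begins: the TLS exchange that nvmf_async_init exercised above condenses to the rpc.py sequence below. This is a minimal sketch for reference, assuming a running nvmf target and SPDK's scripts/rpc.py (abbreviated rpc.py); the interchange key, addresses, ports, and NQNs are exactly the ones the test used, and note from the deprecation warnings above that the --psk file-path form is scheduled for removal in v24.09.

    # throwaway TLS PSK in NVMe interchange format; the file must be mode 0600
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    # lock the subsystem down, open a TLS listener, admit one host by PSK
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    # reattach the host-side controller over the secure channel
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

On success, bdev_get_bdevs -b nvme0n1 reports the same namespace UUID reattached over trsvcid 4421 with cntlid 3, which is what the second JSON dump above asserts.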
22:58:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:38.524 ************************************ 00:28:38.524 START TEST dma 00:28:38.524 ************************************ 00:28:38.524 22:58:30 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:38.524 * Looking for test storage... 00:28:38.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:38.524 22:58:30 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.524 22:58:30 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.524 22:58:30 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.524 22:58:30 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.524 22:58:30 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.524 22:58:30 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.524 22:58:30 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.524 22:58:30 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:38.524 22:58:30 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:38.524 22:58:30 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:38.524 22:58:30 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:38.524 22:58:30 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:38.524 00:28:38.524 real 0m0.066s 00:28:38.524 user 0m0.028s 00:28:38.524 sys 0m0.043s 00:28:38.524 22:58:30 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:38.524 22:58:30 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:38.524 ************************************ 00:28:38.524 END TEST dma 00:28:38.524 ************************************ 00:28:38.524 22:58:30 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:38.524 22:58:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:38.524 22:58:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:38.524 22:58:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:38.524 ************************************ 00:28:38.524 START TEST 
nvmf_identify 00:28:38.524 ************************************ 00:28:38.524 22:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:38.524 * Looking for test storage... 00:28:38.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:38.524 22:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:38.525 22:58:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:40.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:40.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:40.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:40.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:40.427 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:40.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:40.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:28:40.428 00:28:40.428 --- 10.0.0.2 ping statistics --- 00:28:40.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.428 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:40.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:28:40.428 00:28:40.428 --- 10.0.0.1 ping statistics --- 00:28:40.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.428 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3637061 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3637061 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3637061 ']' 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:40.428 22:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.428 [2024-07-26 22:58:32.840174] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:40.428 [2024-07-26 22:58:32.840257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.428 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.428 [2024-07-26 22:58:32.906648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.686 [2024-07-26 22:58:32.993898] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
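The nvmftestinit plumbing traced above (nvmf/common.sh@229 through @268) builds a single-host loopback rig: one port of the two-port NIC is moved into a private network namespace to act as the target, while its sibling stays in the root namespace as the initiator. Condensed into plain iproute2/iptables, keeping this rig's cvl_0_0/cvl_0_1 interface names as-is:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is also why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD at common.sh@270: nvmf_tgt itself runs under ip netns exec cvl_0_0_ns_spdk, as the host/identify.sh@18 invocation below shows.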
00:28:40.686 [2024-07-26 22:58:32.993952] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.686 [2024-07-26 22:58:32.993980] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.686 [2024-07-26 22:58:32.993990] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.686 [2024-07-26 22:58:32.994000] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:40.686 [2024-07-26 22:58:32.994091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.686 [2024-07-26 22:58:32.994155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.686 [2024-07-26 22:58:32.994220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.686 [2024-07-26 22:58:32.994223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.686 [2024-07-26 22:58:33.116556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.686 Malloc0 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:40.686 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.686 [2024-07-26 22:58:33.187648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.953 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.953 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:40.953 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.953 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.953 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.953 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:40.953 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.953 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.953 [ 00:28:40.953 { 00:28:40.953 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:40.953 "subtype": "Discovery", 00:28:40.953 "listen_addresses": [ 00:28:40.953 { 00:28:40.953 "trtype": "TCP", 00:28:40.953 "adrfam": "IPv4", 00:28:40.953 "traddr": "10.0.0.2", 00:28:40.953 "trsvcid": "4420" 00:28:40.953 } 00:28:40.953 ], 00:28:40.953 "allow_any_host": true, 00:28:40.953 "hosts": [] 00:28:40.953 }, 00:28:40.953 { 00:28:40.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.953 "subtype": "NVMe", 00:28:40.953 "listen_addresses": [ 00:28:40.953 { 00:28:40.953 "trtype": "TCP", 00:28:40.953 "adrfam": "IPv4", 00:28:40.954 "traddr": "10.0.0.2", 00:28:40.954 "trsvcid": "4420" 00:28:40.954 } 00:28:40.954 ], 00:28:40.954 "allow_any_host": true, 00:28:40.954 "hosts": [], 00:28:40.954 "serial_number": "SPDK00000000000001", 00:28:40.954 "model_number": "SPDK bdev Controller", 00:28:40.954 "max_namespaces": 32, 00:28:40.954 "min_cntlid": 1, 00:28:40.954 "max_cntlid": 65519, 00:28:40.954 "namespaces": [ 00:28:40.954 { 00:28:40.954 "nsid": 1, 00:28:40.954 "bdev_name": "Malloc0", 00:28:40.954 "name": "Malloc0", 00:28:40.954 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:40.954 "eui64": "ABCDEF0123456789", 00:28:40.954 "uuid": "df1a898d-18ad-4c93-b37f-442cea631552" 00:28:40.954 } 00:28:40.954 ] 00:28:40.954 } 00:28:40.954 ] 00:28:40.954 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.954 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:40.954 [2024-07-26 22:58:33.224475] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:28:40.954 [2024-07-26 22:58:33.224516] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637090 ] 00:28:40.954 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.954 [2024-07-26 22:58:33.258280] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:40.954 [2024-07-26 22:58:33.258362] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:40.954 [2024-07-26 22:58:33.258373] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:40.954 [2024-07-26 22:58:33.258388] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:40.954 [2024-07-26 22:58:33.258401] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:40.954 [2024-07-26 22:58:33.258802] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:40.954 [2024-07-26 22:58:33.258860] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20aa120 0 00:28:40.954 [2024-07-26 22:58:33.273078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:40.954 [2024-07-26 22:58:33.273101] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:40.954 [2024-07-26 22:58:33.273111] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:40.954 [2024-07-26 22:58:33.273118] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:40.954 [2024-07-26 22:58:33.273170] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.273184] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.273193] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.954 [2024-07-26 22:58:33.273213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:40.954 [2024-07-26 22:58:33.273251] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.954 [2024-07-26 22:58:33.281074] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.954 [2024-07-26 22:58:33.281094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.954 [2024-07-26 22:58:33.281102] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21031f0) on tqpair=0x20aa120 00:28:40.954 [2024-07-26 22:58:33.281134] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:40.954 [2024-07-26 22:58:33.281146] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:40.954 [2024-07-26 22:58:33.281156] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:40.954 [2024-07-26 22:58:33.281187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281204] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.954 [2024-07-26 22:58:33.281215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.954 [2024-07-26 22:58:33.281239] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.954 [2024-07-26 22:58:33.281400] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.954 [2024-07-26 22:58:33.281416] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.954 [2024-07-26 22:58:33.281424] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281432] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21031f0) on tqpair=0x20aa120 00:28:40.954 [2024-07-26 22:58:33.281447] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:40.954 [2024-07-26 22:58:33.281462] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:40.954 [2024-07-26 22:58:33.281474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281482] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281489] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.954 [2024-07-26 22:58:33.281500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.954 [2024-07-26 22:58:33.281522] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.954 [2024-07-26 22:58:33.281679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.954 [2024-07-26 22:58:33.281694] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.954 [2024-07-26 22:58:33.281701] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281709] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21031f0) on tqpair=0x20aa120 00:28:40.954 [2024-07-26 22:58:33.281720] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:40.954 [2024-07-26 22:58:33.281736] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:40.954 [2024-07-26 22:58:33.281749] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281765] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.954 [2024-07-26 22:58:33.281776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.954 [2024-07-26 22:58:33.281798] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.954 [2024-07-26 22:58:33.281937] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.954 [2024-07-26 
22:58:33.281953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.954 [2024-07-26 22:58:33.281960] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.954 [2024-07-26 22:58:33.281967] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21031f0) on tqpair=0x20aa120 00:28:40.955 [2024-07-26 22:58:33.281978] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:40.955 [2024-07-26 22:58:33.281995] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282004] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282015] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.955 [2024-07-26 22:58:33.282026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.955 [2024-07-26 22:58:33.282055] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.955 [2024-07-26 22:58:33.282201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.955 [2024-07-26 22:58:33.282216] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.955 [2024-07-26 22:58:33.282223] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282230] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21031f0) on tqpair=0x20aa120 00:28:40.955 [2024-07-26 22:58:33.282240] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:40.955 [2024-07-26 22:58:33.282249] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:40.955 [2024-07-26 22:58:33.282263] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:40.955 [2024-07-26 22:58:33.282373] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:40.955 [2024-07-26 22:58:33.282382] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:40.955 [2024-07-26 22:58:33.282398] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282406] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282412] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.955 [2024-07-26 22:58:33.282423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.955 [2024-07-26 22:58:33.282444] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.955 [2024-07-26 22:58:33.282629] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.955 [2024-07-26 22:58:33.282642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.955 [2024-07-26 22:58:33.282649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282655] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21031f0) on tqpair=0x20aa120 00:28:40.955 [2024-07-26 22:58:33.282666] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:40.955 [2024-07-26 22:58:33.282682] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282691] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.955 [2024-07-26 22:58:33.282708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.955 [2024-07-26 22:58:33.282729] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.955 [2024-07-26 22:58:33.282865] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.955 [2024-07-26 22:58:33.282880] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.955 [2024-07-26 22:58:33.282887] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282894] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21031f0) on tqpair=0x20aa120 00:28:40.955 [2024-07-26 22:58:33.282903] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:40.955 [2024-07-26 22:58:33.282912] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:40.955 [2024-07-26 22:58:33.282930] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:40.955 [2024-07-26 22:58:33.282945] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:40.955 [2024-07-26 22:58:33.282964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.282973] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.955 [2024-07-26 22:58:33.282984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.955 [2024-07-26 22:58:33.283019] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.955 [2024-07-26 22:58:33.283281] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.955 [2024-07-26 22:58:33.283298] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.955 [2024-07-26 22:58:33.283305] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283312] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20aa120): datao=0, datal=4096, cccid=0 00:28:40.955 [2024-07-26 22:58:33.283320] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21031f0) on tqpair(0x20aa120): expected_datao=0, payload_size=4096 00:28:40.955 [2024-07-26 22:58:33.283329] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283352] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283362] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283381] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.955 [2024-07-26 22:58:33.283392] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.955 [2024-07-26 22:58:33.283400] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283407] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21031f0) on tqpair=0x20aa120 00:28:40.955 [2024-07-26 22:58:33.283427] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:40.955 [2024-07-26 22:58:33.283437] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:40.955 [2024-07-26 22:58:33.283445] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:40.955 [2024-07-26 22:58:33.283453] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:40.955 [2024-07-26 22:58:33.283462] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:40.955 [2024-07-26 22:58:33.283470] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:40.955 [2024-07-26 22:58:33.283485] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:40.955 [2024-07-26 22:58:33.283498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283512] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.955 [2024-07-26 22:58:33.283523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:40.955 [2024-07-26 22:58:33.283544] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.955 [2024-07-26 22:58:33.283726] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.955 [2024-07-26 22:58:33.283742] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.955 [2024-07-26 22:58:33.283750] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21031f0) on tqpair=0x20aa120 00:28:40.955 [2024-07-26 22:58:33.283773] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.955 [2024-07-26 22:58:33.283788] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20aa120) 00:28:40.956 [2024-07-26 22:58:33.283798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:40.956 [2024-07-26 22:58:33.283809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.283817] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.283823] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20aa120) 00:28:40.956 [2024-07-26 22:58:33.283833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.956 [2024-07-26 22:58:33.283843] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.283850] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.283871] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20aa120) 00:28:40.956 [2024-07-26 22:58:33.283881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.956 [2024-07-26 22:58:33.283891] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.283898] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.283904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.956 [2024-07-26 22:58:33.283913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.956 [2024-07-26 22:58:33.283922] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:40.956 [2024-07-26 22:58:33.283941] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:40.956 [2024-07-26 22:58:33.283953] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.283961] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20aa120) 00:28:40.956 [2024-07-26 22:58:33.283971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.956 [2024-07-26 22:58:33.283993] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21031f0, cid 0, qid 0 00:28:40.956 [2024-07-26 22:58:33.284019] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103350, cid 1, qid 0 00:28:40.956 [2024-07-26 22:58:33.284027] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21034b0, cid 2, qid 0 00:28:40.956 [2024-07-26 22:58:33.284035] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.956 [2024-07-26 22:58:33.284043] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103770, cid 4, qid 0 00:28:40.956 [2024-07-26 22:58:33.288071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.956 [2024-07-26 22:58:33.288090] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.956 [2024-07-26 22:58:33.288098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288106] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103770) on tqpair=0x20aa120 
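The GET FEATURES KEEP ALIVE TIMER command above (feature identifier 0x0f in cdw10) is how the host reads back the keep-alive timeout it negotiated at connect time; once the completion arrives, the driver arms its own timer, as the "Sending keep alive every 5000000 us" line just below reports (consistent with half of the 10000 ms default). A minimal sketch of requesting that timeout through SPDK's public connect path follows; the helper name and the hard-coded address literals are illustrative assumptions, while the spdk_nvme_* calls and the keep_alive_timeout_ms field are the real public API.

    #include <stdio.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    /* Connect to the discovery subsystem at 10.0.0.2:4420 with an explicit
     * keep-alive timeout. The driver negotiates the value with the target
     * (the GET FEATURES exchange in the trace) and then sends periodic
     * KEEP ALIVE commands on its own. */
    static struct spdk_nvme_ctrlr *
    connect_with_keep_alive(void)
    {
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr_opts opts;

            spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
            trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
            snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
            snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
            snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", SPDK_NVMF_DISCOVERY_NQN);

            spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
            opts.keep_alive_timeout_ms = 10000; /* the default seen in this run */

            return spdk_nvme_connect(&trid, &opts, sizeof(opts));
    }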
00:28:40.956 [2024-07-26 22:58:33.288117] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:40.956 [2024-07-26 22:58:33.288132] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:40.956 [2024-07-26 22:58:33.288152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288162] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20aa120) 00:28:40.956 [2024-07-26 22:58:33.288173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.956 [2024-07-26 22:58:33.288195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103770, cid 4, qid 0 00:28:40.956 [2024-07-26 22:58:33.288377] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.956 [2024-07-26 22:58:33.288392] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.956 [2024-07-26 22:58:33.288400] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288407] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20aa120): datao=0, datal=4096, cccid=4 00:28:40.956 [2024-07-26 22:58:33.288415] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2103770) on tqpair(0x20aa120): expected_datao=0, payload_size=4096 00:28:40.956 [2024-07-26 22:58:33.288423] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288449] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288458] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288554] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.956 [2024-07-26 22:58:33.288566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.956 [2024-07-26 22:58:33.288574] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288581] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103770) on tqpair=0x20aa120 00:28:40.956 [2024-07-26 22:58:33.288601] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:40.956 [2024-07-26 22:58:33.288651] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288662] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20aa120) 00:28:40.956 [2024-07-26 22:58:33.288673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.956 [2024-07-26 22:58:33.288685] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.288699] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20aa120) 00:28:40.956 [2024-07-26 22:58:33.288708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.956 [2024-07-26 22:58:33.288750] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103770, cid 4, qid 0 00:28:40.956 [2024-07-26 22:58:33.288762] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21038d0, cid 5, qid 0 00:28:40.956 [2024-07-26 22:58:33.289023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.956 [2024-07-26 22:58:33.289039] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.956 [2024-07-26 22:58:33.289046] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.289053] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20aa120): datao=0, datal=1024, cccid=4 00:28:40.956 [2024-07-26 22:58:33.289069] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2103770) on tqpair(0x20aa120): expected_datao=0, payload_size=1024 00:28:40.956 [2024-07-26 22:58:33.289077] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.289087] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.289095] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.289107] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.956 [2024-07-26 22:58:33.289117] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.956 [2024-07-26 22:58:33.289124] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.289131] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21038d0) on tqpair=0x20aa120 00:28:40.956 [2024-07-26 22:58:33.334071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.956 [2024-07-26 22:58:33.334101] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.956 [2024-07-26 22:58:33.334124] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.334132] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103770) on tqpair=0x20aa120 00:28:40.956 [2024-07-26 22:58:33.334157] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.956 [2024-07-26 22:58:33.334167] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20aa120) 00:28:40.956 [2024-07-26 22:58:33.334179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.957 [2024-07-26 22:58:33.334209] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103770, cid 4, qid 0 00:28:40.957 [2024-07-26 22:58:33.334462] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.957 [2024-07-26 22:58:33.334478] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.957 [2024-07-26 22:58:33.334485] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.957 [2024-07-26 22:58:33.334492] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20aa120): datao=0, datal=3072, cccid=4 00:28:40.957 [2024-07-26 22:58:33.334500] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2103770) on tqpair(0x20aa120): expected_datao=0, payload_size=3072 00:28:40.957 [2024-07-26 22:58:33.334507] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.957 [2024-07-26 22:58:33.334518] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
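The GET LOG PAGE (02) commands in this stretch all target log page 0x70, the discovery log (the low byte of cdw10); the page is pulled in pieces whose sizes match the surrounding c2h_data lines: a 1024-byte read of the header, a 3072-byte read, and an 8-byte re-read of the generation counter just below, after which the decoded page is printed. A rough sketch of the initial header read against SPDK's public API, assuming an already-connected discovery controller; the function name, busy-wait loop, and error handling are illustrative, while spdk_nvme_ctrlr_cmd_get_log_page() and struct spdk_nvmf_discovery_log_page are the real interfaces.

    #include <stdio.h>
    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static volatile bool g_done;

    /* Admin-command completion callback: just flag completion. */
    static void
    get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            g_done = true;
    }

    /* Read the discovery log page header (genctr, numrec, recfmt) from a
     * connected discovery controller. */
    static int
    read_discovery_header(struct spdk_nvme_ctrlr *ctrlr)
    {
            static struct spdk_nvmf_discovery_log_page header;
            int rc;

            g_done = false;
            rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                                  0 /* nsid */, &header, sizeof(header),
                                                  0 /* offset */, get_log_done, NULL);
            if (rc != 0) {
                    return rc;
            }
            while (!g_done) {
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            printf("genctr=%ju numrec=%ju\n",
                   (uintmax_t)header.genctr, (uintmax_t)header.numrec);
            /* A follow-up read at the entries' offset would fetch the
             * per-subsystem records themselves. */
            return 0;
    }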
00:28:40.957 [2024-07-26 22:58:33.334526] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:40.957 [2024-07-26 22:58:33.376208] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.957 [2024-07-26 22:58:33.376228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.957 [2024-07-26 22:58:33.376235] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.957 [2024-07-26 22:58:33.376242] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103770) on tqpair=0x20aa120
00:28:40.957 [2024-07-26 22:58:33.376259] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.957 [2024-07-26 22:58:33.376268] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20aa120)
00:28:40.957 [2024-07-26 22:58:33.376280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.957 [2024-07-26 22:58:33.376308] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103770, cid 4, qid 0
00:28:40.957 [2024-07-26 22:58:33.376461] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:40.957 [2024-07-26 22:58:33.376477] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:40.957 [2024-07-26 22:58:33.376484] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:40.957 [2024-07-26 22:58:33.376490] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20aa120): datao=0, datal=8, cccid=4
00:28:40.957 [2024-07-26 22:58:33.376498] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2103770) on tqpair(0x20aa120): expected_datao=0, payload_size=8
00:28:40.957 [2024-07-26 22:58:33.376505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.957 [2024-07-26 22:58:33.376516] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:40.957 [2024-07-26 22:58:33.376523] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:40.957 [2024-07-26 22:58:33.422078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.957 [2024-07-26 22:58:33.422103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.957 [2024-07-26 22:58:33.422127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.957 [2024-07-26 22:58:33.422134] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103770) on tqpair=0x20aa120
00:28:40.957 =====================================================
00:28:40.957 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:40.957 =====================================================
00:28:40.957 Controller Capabilities/Features
00:28:40.957 ================================
00:28:40.957 Vendor ID: 0000
00:28:40.957 Subsystem Vendor ID: 0000
00:28:40.957 Serial Number: ....................
00:28:40.957 Model Number: ........................................
00:28:40.957 Firmware Version: 24.05.1
00:28:40.957 Recommended Arb Burst: 0
00:28:40.957 IEEE OUI Identifier: 00 00 00
00:28:40.957 Multi-path I/O
00:28:40.957 May have multiple subsystem ports: No
00:28:40.957 May have multiple controllers: No
00:28:40.957 Associated with SR-IOV VF: No
00:28:40.957 Max Data Transfer Size: 131072
00:28:40.957 Max Number of Namespaces: 0
00:28:40.957 Max Number of I/O Queues: 1024
00:28:40.957 NVMe Specification Version (VS): 1.3
00:28:40.957 NVMe Specification Version (Identify): 1.3
00:28:40.957 Maximum Queue Entries: 128
00:28:40.957 Contiguous Queues Required: Yes
00:28:40.957 Arbitration Mechanisms Supported
00:28:40.957 Weighted Round Robin: Not Supported
00:28:40.957 Vendor Specific: Not Supported
00:28:40.957 Reset Timeout: 15000 ms
00:28:40.957 Doorbell Stride: 4 bytes
00:28:40.957 NVM Subsystem Reset: Not Supported
00:28:40.957 Command Sets Supported
00:28:40.957 NVM Command Set: Supported
00:28:40.957 Boot Partition: Not Supported
00:28:40.957 Memory Page Size Minimum: 4096 bytes
00:28:40.957 Memory Page Size Maximum: 4096 bytes
00:28:40.957 Persistent Memory Region: Not Supported
00:28:40.957 Optional Asynchronous Events Supported
00:28:40.957 Namespace Attribute Notices: Not Supported
00:28:40.957 Firmware Activation Notices: Not Supported
00:28:40.957 ANA Change Notices: Not Supported
00:28:40.957 PLE Aggregate Log Change Notices: Not Supported
00:28:40.957 LBA Status Info Alert Notices: Not Supported
00:28:40.957 EGE Aggregate Log Change Notices: Not Supported
00:28:40.957 Normal NVM Subsystem Shutdown event: Not Supported
00:28:40.957 Zone Descriptor Change Notices: Not Supported
00:28:40.957 Discovery Log Change Notices: Supported
00:28:40.957 Controller Attributes
00:28:40.957 128-bit Host Identifier: Not Supported
00:28:40.957 Non-Operational Permissive Mode: Not Supported
00:28:40.957 NVM Sets: Not Supported
00:28:40.957 Read Recovery Levels: Not Supported
00:28:40.957 Endurance Groups: Not Supported
00:28:40.957 Predictable Latency Mode: Not Supported
00:28:40.957 Traffic Based Keep ALive: Not Supported
00:28:40.957 Namespace Granularity: Not Supported
00:28:40.957 SQ Associations: Not Supported
00:28:40.957 UUID List: Not Supported
00:28:40.957 Multi-Domain Subsystem: Not Supported
00:28:40.957 Fixed Capacity Management: Not Supported
00:28:40.957 Variable Capacity Management: Not Supported
00:28:40.957 Delete Endurance Group: Not Supported
00:28:40.957 Delete NVM Set: Not Supported
00:28:40.958 Extended LBA Formats Supported: Not Supported
00:28:40.958 Flexible Data Placement Supported: Not Supported
00:28:40.958
00:28:40.958 Controller Memory Buffer Support
00:28:40.958 ================================
00:28:40.958 Supported: No
00:28:40.958
00:28:40.958 Persistent Memory Region Support
00:28:40.958 ================================
00:28:40.958 Supported: No
00:28:40.958
00:28:40.958 Admin Command Set Attributes
00:28:40.958 ============================
00:28:40.958 Security Send/Receive: Not Supported
00:28:40.958 Format NVM: Not Supported
00:28:40.958 Firmware Activate/Download: Not Supported
00:28:40.958 Namespace Management: Not Supported
00:28:40.958 Device Self-Test: Not Supported
00:28:40.958 Directives: Not Supported
00:28:40.958 NVMe-MI: Not Supported
00:28:40.958 Virtualization Management: Not Supported
00:28:40.958 Doorbell Buffer Config: Not Supported
00:28:40.958 Get LBA Status Capability: Not Supported
00:28:40.958 Command & Feature Lockdown Capability: Not Supported
00:28:40.958 Abort Command Limit: 1
00:28:40.958 Async Event Request Limit: 4
00:28:40.958 Number of Firmware Slots: N/A
00:28:40.958 Firmware Slot 1 Read-Only: N/A
00:28:40.958 Firmware Activation Without Reset: N/A
00:28:40.958 Multiple Update Detection Support: N/A
00:28:40.958 Firmware Update Granularity: No Information Provided
00:28:40.958 Per-Namespace SMART Log: No
00:28:40.958 Asymmetric Namespace Access Log Page: Not Supported
00:28:40.958 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:40.958 Command Effects Log Page: Not Supported
00:28:40.958 Get Log Page Extended Data: Supported
00:28:40.958 Telemetry Log Pages: Not Supported
00:28:40.958 Persistent Event Log Pages: Not Supported
00:28:40.958 Supported Log Pages Log Page: May Support
00:28:40.958 Commands Supported & Effects Log Page: Not Supported
00:28:40.958 Feature Identifiers & Effects Log Page:May Support
00:28:40.958 NVMe-MI Commands & Effects Log Page: May Support
00:28:40.958 Data Area 4 for Telemetry Log: Not Supported
00:28:40.958 Error Log Page Entries Supported: 128
00:28:40.958 Keep Alive: Not Supported
00:28:40.958
00:28:40.958 NVM Command Set Attributes
00:28:40.958 ==========================
00:28:40.958 Submission Queue Entry Size
00:28:40.958 Max: 1
00:28:40.958 Min: 1
00:28:40.958 Completion Queue Entry Size
00:28:40.958 Max: 1
00:28:40.958 Min: 1
00:28:40.958 Number of Namespaces: 0
00:28:40.958 Compare Command: Not Supported
00:28:40.958 Write Uncorrectable Command: Not Supported
00:28:40.958 Dataset Management Command: Not Supported
00:28:40.958 Write Zeroes Command: Not Supported
00:28:40.958 Set Features Save Field: Not Supported
00:28:40.958 Reservations: Not Supported
00:28:40.958 Timestamp: Not Supported
00:28:40.958 Copy: Not Supported
00:28:40.958 Volatile Write Cache: Not Present
00:28:40.958 Atomic Write Unit (Normal): 1
00:28:40.958 Atomic Write Unit (PFail): 1
00:28:40.958 Atomic Compare & Write Unit: 1
00:28:40.958 Fused Compare & Write: Supported
00:28:40.958 Scatter-Gather List
00:28:40.958 SGL Command Set: Supported
00:28:40.958 SGL Keyed: Supported
00:28:40.958 SGL Bit Bucket Descriptor: Not Supported
00:28:40.958 SGL Metadata Pointer: Not Supported
00:28:40.958 Oversized SGL: Not Supported
00:28:40.958 SGL Metadata Address: Not Supported
00:28:40.958 SGL Offset: Supported
00:28:40.958 Transport SGL Data Block: Not Supported
00:28:40.958 Replay Protected Memory Block: Not Supported
00:28:40.958
00:28:40.958 Firmware Slot Information
00:28:40.958 =========================
00:28:40.958 Active slot: 0
00:28:40.958
00:28:40.958
00:28:40.958 Error Log
00:28:40.958 =========
00:28:40.958
00:28:40.958 Active Namespaces
00:28:40.958 =================
00:28:40.958 Discovery Log Page
00:28:40.958 ==================
00:28:40.958 Generation Counter: 2
00:28:40.958 Number of Records: 2
00:28:40.958 Record Format: 0
00:28:40.958
00:28:40.958 Discovery Log Entry 0
00:28:40.958 ----------------------
00:28:40.958 Transport Type: 3 (TCP)
00:28:40.958 Address Family: 1 (IPv4)
00:28:40.958 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:40.958 Entry Flags:
00:28:40.958 Duplicate Returned Information: 1
00:28:40.958 Explicit Persistent Connection Support for Discovery: 1
00:28:40.958 Transport Requirements:
00:28:40.958 Secure Channel: Not Required
00:28:40.958 Port ID: 0 (0x0000)
00:28:40.958 Controller ID: 65535 (0xffff)
00:28:40.958 Admin Max SQ Size: 128
00:28:40.958 Transport Service Identifier: 4420
00:28:40.958 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:40.958 Transport Address: 10.0.0.2
00:28:40.958 Discovery Log Entry 1
00:28:40.958 ----------------------
00:28:40.958 Transport Type: 3 (TCP)
00:28:40.958 Address Family: 1 (IPv4)
00:28:40.958 Subsystem Type: 2 (NVM Subsystem)
00:28:40.958 Entry Flags:
00:28:40.958 Duplicate Returned Information: 0
00:28:40.958 Explicit Persistent Connection Support for Discovery: 0
00:28:40.958 Transport Requirements:
00:28:40.958 Secure Channel: Not Required
00:28:40.958 Port ID: 0 (0x0000)
00:28:40.958 Controller ID: 65535 (0xffff)
00:28:40.958 Admin Max SQ Size: 128
00:28:40.958 Transport Service Identifier: 4420
00:28:40.958 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:40.959 Transport Address: 10.0.0.2 [2024-07-26 22:58:33.422250] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:28:40.959 [2024-07-26 22:58:33.422276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:40.959 [2024-07-26 22:58:33.422290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:40.959 [2024-07-26 22:58:33.422300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:40.959 [2024-07-26 22:58:33.422310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:40.959 [2024-07-26 22:58:33.422328] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.959 [2024-07-26 22:58:33.422338] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.959 [2024-07-26 22:58:33.422345] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120)
00:28:40.959 [2024-07-26 22:58:33.422356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.959 [2024-07-26 22:58:33.422380] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0
00:28:40.959 [2024-07-26 22:58:33.422518] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.959 [2024-07-26 22:58:33.422531] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.959 [2024-07-26 22:58:33.422538] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.959 [2024-07-26 22:58:33.422545] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120
00:28:40.959 [2024-07-26 22:58:33.422558] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.959 [2024-07-26 22:58:33.422567] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.959 [2024-07-26 22:58:33.422574] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120)
00:28:40.959 [2024-07-26 22:58:33.422584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.959 [2024-07-26 22:58:33.422610] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0
00:28:40.959 [2024-07-26 22:58:33.422759] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.959 [2024-07-26 22:58:33.422775] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.959 [2024-07-26 22:58:33.422782]
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.422789] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.959 [2024-07-26 22:58:33.422799] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:40.959 [2024-07-26 22:58:33.422807] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:40.959 [2024-07-26 22:58:33.422824] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.422833] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.422840] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.959 [2024-07-26 22:58:33.422850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.959 [2024-07-26 22:58:33.422871] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.959 [2024-07-26 22:58:33.423040] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.959 [2024-07-26 22:58:33.423066] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.959 [2024-07-26 22:58:33.423075] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423082] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.959 [2024-07-26 22:58:33.423102] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423111] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423118] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.959 [2024-07-26 22:58:33.423128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.959 [2024-07-26 22:58:33.423150] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.959 [2024-07-26 22:58:33.423280] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.959 [2024-07-26 22:58:33.423292] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.959 [2024-07-26 22:58:33.423299] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423306] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.959 [2024-07-26 22:58:33.423323] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423333] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423339] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.959 [2024-07-26 22:58:33.423349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.959 [2024-07-26 22:58:33.423370] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.959 [2024-07-26 22:58:33.423500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.959 [2024-07-26 
22:58:33.423515] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.959 [2024-07-26 22:58:33.423522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423529] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.959 [2024-07-26 22:58:33.423547] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423557] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423563] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.959 [2024-07-26 22:58:33.423574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.959 [2024-07-26 22:58:33.423594] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.959 [2024-07-26 22:58:33.423723] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.959 [2024-07-26 22:58:33.423735] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.959 [2024-07-26 22:58:33.423742] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423749] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.959 [2024-07-26 22:58:33.423766] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423775] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.959 [2024-07-26 22:58:33.423782] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.959 [2024-07-26 22:58:33.423793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.959 [2024-07-26 22:58:33.423813] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.959 [2024-07-26 22:58:33.423947] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.959 [2024-07-26 22:58:33.423963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.959 [2024-07-26 22:58:33.423973] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.423981] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.960 [2024-07-26 22:58:33.423999] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424009] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424015] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.960 [2024-07-26 22:58:33.424026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.960 [2024-07-26 22:58:33.424047] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.960 [2024-07-26 22:58:33.424187] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.960 [2024-07-26 22:58:33.424200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.960 [2024-07-26 22:58:33.424207] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
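The repeated FABRIC PROPERTY GET qid:0 cid:3 commands through this stretch are the shutdown handshake: after the PROPERTY SET above wrote CC.SHN, the driver keeps reading CSTS over the admin queue until the controller reports shutdown complete, bounded by the "shutdown timeout = 10000 ms" logged earlier (it finishes in 7 ms further down). A sketch of driving the same teardown through the public detach API follows; the polling-loop shape is an assumption, while the two spdk_nvme_detach_* calls are the real API.

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Non-blocking controller teardown: start the detach, then poll until
     * the shutdown handshake (the CSTS property reads in the trace)
     * completes. */
    static void
    teardown(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_detach_ctx *ctx = NULL;

            if (spdk_nvme_detach_async(ctrlr, &ctx) != 0 || ctx == NULL) {
                    return;
            }
            /* Returns -EAGAIN while the detach is still in progress. */
            while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
            }
    }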
00:28:40.960 [2024-07-26 22:58:33.424214] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.960 [2024-07-26 22:58:33.424231] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424240] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.960 [2024-07-26 22:58:33.424257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.960 [2024-07-26 22:58:33.424278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.960 [2024-07-26 22:58:33.424411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.960 [2024-07-26 22:58:33.424426] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.960 [2024-07-26 22:58:33.424433] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424440] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.960 [2024-07-26 22:58:33.424458] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424467] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424474] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.960 [2024-07-26 22:58:33.424485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.960 [2024-07-26 22:58:33.424505] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.960 [2024-07-26 22:58:33.424634] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.960 [2024-07-26 22:58:33.424649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.960 [2024-07-26 22:58:33.424657] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.960 [2024-07-26 22:58:33.424681] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424691] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.960 [2024-07-26 22:58:33.424697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.960 [2024-07-26 22:58:33.424708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.960 [2024-07-26 22:58:33.424728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.960 [2024-07-26 22:58:33.424857] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.960 [2024-07-26 22:58:33.424870] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.960 [2024-07-26 22:58:33.424880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.424888] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.961 [2024-07-26 22:58:33.424905] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.424915] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.424921] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.961 [2024-07-26 22:58:33.424932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.961 [2024-07-26 22:58:33.424952] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.961 [2024-07-26 22:58:33.425085] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.961 [2024-07-26 22:58:33.425101] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.961 [2024-07-26 22:58:33.425108] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425115] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.961 [2024-07-26 22:58:33.425133] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425142] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425149] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.961 [2024-07-26 22:58:33.425160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.961 [2024-07-26 22:58:33.425181] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.961 [2024-07-26 22:58:33.425315] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.961 [2024-07-26 22:58:33.425330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.961 [2024-07-26 22:58:33.425337] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425344] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.961 [2024-07-26 22:58:33.425362] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425372] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425378] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.961 [2024-07-26 22:58:33.425389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.961 [2024-07-26 22:58:33.425410] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.961 [2024-07-26 22:58:33.425539] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.961 [2024-07-26 22:58:33.425551] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.961 [2024-07-26 22:58:33.425558] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425565] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.961 [2024-07-26 22:58:33.425582] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425592] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.961 [2024-07-26 
22:58:33.425599] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.961 [2024-07-26 22:58:33.425609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.961 [2024-07-26 22:58:33.425629] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.961 [2024-07-26 22:58:33.425761] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.961 [2024-07-26 22:58:33.425773] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.961 [2024-07-26 22:58:33.425780] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425790] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.961 [2024-07-26 22:58:33.425808] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425818] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.425825] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.961 [2024-07-26 22:58:33.425835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.961 [2024-07-26 22:58:33.425866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.961 [2024-07-26 22:58:33.425991] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.961 [2024-07-26 22:58:33.426004] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.961 [2024-07-26 22:58:33.426011] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.426018] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.961 [2024-07-26 22:58:33.426036] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.426045] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.426052] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20aa120) 00:28:40.961 [2024-07-26 22:58:33.430087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.961 [2024-07-26 22:58:33.430119] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2103610, cid 3, qid 0 00:28:40.961 [2024-07-26 22:58:33.430303] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.961 [2024-07-26 22:58:33.430316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.961 [2024-07-26 22:58:33.430323] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.961 [2024-07-26 22:58:33.430330] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2103610) on tqpair=0x20aa120 00:28:40.961 [2024-07-26 22:58:33.430344] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:28:40.961 00:28:40.961 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:41.270 [2024-07-26 22:58:33.464138] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:41.270 [2024-07-26 22:58:33.464184] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637215 ] 00:28:41.270 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.270 [2024-07-26 22:58:33.496908] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:41.270 [2024-07-26 22:58:33.496956] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:41.270 [2024-07-26 22:58:33.496965] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:41.270 [2024-07-26 22:58:33.496980] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:41.270 [2024-07-26 22:58:33.496991] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:41.270 [2024-07-26 22:58:33.500114] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:41.270 [2024-07-26 22:58:33.500153] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x666120 0 00:28:41.270 [2024-07-26 22:58:33.508077] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:41.270 [2024-07-26 22:58:33.508095] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:41.270 [2024-07-26 22:58:33.508103] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:41.270 [2024-07-26 22:58:33.508109] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:41.270 [2024-07-26 22:58:33.508157] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.508169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.508176] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.270 [2024-07-26 22:58:33.508190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:41.270 [2024-07-26 22:58:33.508216] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.270 [2024-07-26 22:58:33.516072] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.270 [2024-07-26 22:58:33.516090] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.270 [2024-07-26 22:58:33.516098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf1f0) on tqpair=0x666120 00:28:41.270 [2024-07-26 22:58:33.516119] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:41.270 [2024-07-26 22:58:33.516129] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:41.270 [2024-07-26 22:58:33.516139] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:41.270 [2024-07-26 22:58:33.516159] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516176] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.270 [2024-07-26 22:58:33.516187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.270 [2024-07-26 22:58:33.516211] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.270 [2024-07-26 22:58:33.516394] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.270 [2024-07-26 22:58:33.516407] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.270 [2024-07-26 22:58:33.516414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf1f0) on tqpair=0x666120 00:28:41.270 [2024-07-26 22:58:33.516432] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:41.270 [2024-07-26 22:58:33.516446] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:41.270 [2024-07-26 22:58:33.516459] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516466] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516473] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.270 [2024-07-26 22:58:33.516483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.270 [2024-07-26 22:58:33.516504] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.270 [2024-07-26 22:58:33.516638] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.270 [2024-07-26 22:58:33.516652] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.270 [2024-07-26 22:58:33.516659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516666] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf1f0) on tqpair=0x666120 00:28:41.270 [2024-07-26 22:58:33.516678] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:41.270 [2024-07-26 22:58:33.516693] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:41.270 [2024-07-26 22:58:33.516705] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516713] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516719] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.270 [2024-07-26 22:58:33.516729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.270 [2024-07-26 22:58:33.516750] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.270 [2024-07-26 22:58:33.516920] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.270 [2024-07-26 22:58:33.516935] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.270 [2024-07-26 22:58:33.516942] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516948] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf1f0) on tqpair=0x666120 00:28:41.270 [2024-07-26 22:58:33.516957] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:41.270 [2024-07-26 22:58:33.516974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516983] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.516989] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.270 [2024-07-26 22:58:33.516999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.270 [2024-07-26 22:58:33.517020] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.270 [2024-07-26 22:58:33.517192] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.270 [2024-07-26 22:58:33.517206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.270 [2024-07-26 22:58:33.517213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.270 [2024-07-26 22:58:33.517220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf1f0) on tqpair=0x666120 00:28:41.270 [2024-07-26 22:58:33.517228] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:41.270 [2024-07-26 22:58:33.517237] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:41.270 [2024-07-26 22:58:33.517250] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:41.270 [2024-07-26 22:58:33.517374] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:41.270 [2024-07-26 22:58:33.517381] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:41.271 [2024-07-26 22:58:33.517394] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.517401] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.517407] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.271 [2024-07-26 22:58:33.517418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-07-26 22:58:33.517439] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.271 [2024-07-26 22:58:33.517608] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.271 [2024-07-26 22:58:33.517623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.271 [2024-07-26 22:58:33.517633] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.517640] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf1f0) on tqpair=0x666120 00:28:41.271 [2024-07-26 22:58:33.517648] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:41.271 [2024-07-26 22:58:33.517665] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.517674] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.517681] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.271 [2024-07-26 22:58:33.517691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-07-26 22:58:33.517711] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.271 [2024-07-26 22:58:33.517842] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.271 [2024-07-26 22:58:33.517857] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.271 [2024-07-26 22:58:33.517864] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.517870] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf1f0) on tqpair=0x666120 00:28:41.271 [2024-07-26 22:58:33.517878] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:41.271 [2024-07-26 22:58:33.517886] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:41.271 [2024-07-26 22:58:33.517899] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:41.271 [2024-07-26 22:58:33.517913] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:41.271 [2024-07-26 22:58:33.517928] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.517937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.271 [2024-07-26 22:58:33.517947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-07-26 22:58:33.517968] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.271 [2024-07-26 22:58:33.518157] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:41.271 [2024-07-26 22:58:33.518172] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:41.271 [2024-07-26 22:58:33.518179] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518185] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x666120): datao=0, datal=4096, cccid=0 00:28:41.271 [2024-07-26 22:58:33.518193] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf1f0) on tqpair(0x666120): expected_datao=0, payload_size=4096 00:28:41.271 [2024-07-26 22:58:33.518201] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518211] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518219] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518258] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.271 [2024-07-26 22:58:33.518270] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.271 [2024-07-26 22:58:33.518276] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518283] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf1f0) on tqpair=0x666120 00:28:41.271 [2024-07-26 22:58:33.518298] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:41.271 [2024-07-26 22:58:33.518308] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:41.271 [2024-07-26 22:58:33.518318] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:41.271 [2024-07-26 22:58:33.518326] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:41.271 [2024-07-26 22:58:33.518333] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:41.271 [2024-07-26 22:58:33.518341] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:41.271 [2024-07-26 22:58:33.518356] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:41.271 [2024-07-26 22:58:33.518368] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518376] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518382] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.271 [2024-07-26 22:58:33.518393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:41.271 [2024-07-26 22:58:33.518415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.271 [2024-07-26 22:58:33.518585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.271 [2024-07-26 22:58:33.518600] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.271 [2024-07-26 22:58:33.518607] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518613] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf1f0) on tqpair=0x666120 00:28:41.271 [2024-07-26 22:58:33.518624] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518631] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518637] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x666120) 00:28:41.271 [2024-07-26 22:58:33.518647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.271 [2024-07-26 22:58:33.518657] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
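The identify-done lines above record what the driver cached from IDENTIFY CONTROLLER: the transport's own transfer cap, the MDTS-derived 131072-byte limit, and CNTLID 0x0001. A short sketch of reading those values back through the public API; the helper and the printed layout are illustrative, while spdk_nvme_ctrlr_get_data() and spdk_nvme_ctrlr_get_regs_cap() are the real accessors.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Recompute the MDTS transfer limit: 2^MDTS units of the minimum
     * memory page size, i.e. 2^(12 + CAP.MPSMIN) bytes per unit.
     * 131072 in this run corresponds to MDTS = 5 with 4 KiB pages. */
    static void
    print_identify_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

            printf("CNTLID: 0x%04x\n", cdata->cntlid);
            if (cdata->mdts != 0) {
                    printf("MDTS max transfer: %u bytes\n",
                           (1u << cdata->mdts) * (1u << (12 + cap.bits.mpsmin)));
            }
    }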
00:28:41.271 [2024-07-26 22:58:33.518664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518670] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x666120) 00:28:41.271 [2024-07-26 22:58:33.518679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.271 [2024-07-26 22:58:33.518688] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518695] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x666120) 00:28:41.271 [2024-07-26 22:58:33.518709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.271 [2024-07-26 22:58:33.518719] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518725] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518731] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.271 [2024-07-26 22:58:33.518740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.271 [2024-07-26 22:58:33.518748] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:41.271 [2024-07-26 22:58:33.518767] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:41.271 [2024-07-26 22:58:33.518779] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.518789] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x666120) 00:28:41.271 [2024-07-26 22:58:33.518799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-07-26 22:58:33.518821] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf1f0, cid 0, qid 0 00:28:41.271 [2024-07-26 22:58:33.518832] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf350, cid 1, qid 0 00:28:41.271 [2024-07-26 22:58:33.518839] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf4b0, cid 2, qid 0 00:28:41.271 [2024-07-26 22:58:33.518847] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.271 [2024-07-26 22:58:33.518854] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf770, cid 4, qid 0 00:28:41.271 [2024-07-26 22:58:33.519065] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.271 [2024-07-26 22:58:33.519081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.271 [2024-07-26 22:58:33.519088] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.271 [2024-07-26 22:58:33.519095] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf770) on tqpair=0x666120 00:28:41.272 [2024-07-26 22:58:33.519103] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive 
every 5000000 us 00:28:41.272 [2024-07-26 22:58:33.519113] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:41.272 [2024-07-26 22:58:33.519128] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:41.272 [2024-07-26 22:58:33.519141] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:41.272 [2024-07-26 22:58:33.519152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.519159] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.519166] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x666120) 00:28:41.272 [2024-07-26 22:58:33.519177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:41.272 [2024-07-26 22:58:33.519198] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf770, cid 4, qid 0 00:28:41.272 [2024-07-26 22:58:33.519368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.272 [2024-07-26 22:58:33.519396] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.272 [2024-07-26 22:58:33.519403] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.519410] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf770) on tqpair=0x666120 00:28:41.272 [2024-07-26 22:58:33.519477] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:41.272 [2024-07-26 22:58:33.519497] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:41.272 [2024-07-26 22:58:33.519512] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.519519] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x666120) 00:28:41.272 [2024-07-26 22:58:33.519530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-07-26 22:58:33.519550] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf770, cid 4, qid 0 00:28:41.272 [2024-07-26 22:58:33.519697] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:41.272 [2024-07-26 22:58:33.519711] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:41.272 [2024-07-26 22:58:33.519718] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.519728] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x666120): datao=0, datal=4096, cccid=4 00:28:41.272 [2024-07-26 22:58:33.519736] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf770) on tqpair(0x666120): expected_datao=0, payload_size=4096 00:28:41.272 [2024-07-26 22:58:33.519743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.519768] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.519792] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.519924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.272 [2024-07-26 22:58:33.519939] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.272 [2024-07-26 22:58:33.519945] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.519952] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf770) on tqpair=0x666120 00:28:41.272 [2024-07-26 22:58:33.519968] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:41.272 [2024-07-26 22:58:33.519990] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:41.272 [2024-07-26 22:58:33.520009] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:41.272 [2024-07-26 22:58:33.520023] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.520030] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x666120) 00:28:41.272 [2024-07-26 22:58:33.520041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-07-26 22:58:33.524068] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf770, cid 4, qid 0 00:28:41.272 [2024-07-26 22:58:33.524089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:41.272 [2024-07-26 22:58:33.524100] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:41.272 [2024-07-26 22:58:33.524107] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.524113] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x666120): datao=0, datal=4096, cccid=4 00:28:41.272 [2024-07-26 22:58:33.524121] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf770) on tqpair(0x666120): expected_datao=0, payload_size=4096 00:28:41.272 [2024-07-26 22:58:33.524128] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.524138] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.524146] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.564071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.272 [2024-07-26 22:58:33.564090] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.272 [2024-07-26 22:58:33.564097] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.564105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf770) on tqpair=0x666120 00:28:41.272 [2024-07-26 22:58:33.564129] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:41.272 [2024-07-26 22:58:33.564150] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:41.272 [2024-07-26 22:58:33.564165] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.272 [2024-07-26 
22:58:33.564173] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x666120) 00:28:41.272 [2024-07-26 22:58:33.564185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-07-26 22:58:33.564215] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf770, cid 4, qid 0 00:28:41.272 [2024-07-26 22:58:33.564362] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:41.272 [2024-07-26 22:58:33.564375] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:41.272 [2024-07-26 22:58:33.564382] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.564388] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x666120): datao=0, datal=4096, cccid=4 00:28:41.272 [2024-07-26 22:58:33.564396] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf770) on tqpair(0x666120): expected_datao=0, payload_size=4096 00:28:41.272 [2024-07-26 22:58:33.564404] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.564425] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.564434] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:41.272 [2024-07-26 22:58:33.605178] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.272 [2024-07-26 22:58:33.605197] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.272 [2024-07-26 22:58:33.605205] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.605212] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf770) on tqpair=0x666120 00:28:41.273 [2024-07-26 22:58:33.605227] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:41.273 [2024-07-26 22:58:33.605243] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:41.273 [2024-07-26 22:58:33.605259] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:41.273 [2024-07-26 22:58:33.605272] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:41.273 [2024-07-26 22:58:33.605281] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:41.273 [2024-07-26 22:58:33.605290] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:41.273 [2024-07-26 22:58:33.605298] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:41.273 [2024-07-26 22:58:33.605306] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:41.273 [2024-07-26 22:58:33.605329] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.605339] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x666120) 00:28:41.273 [2024-07-26 
22:58:33.605365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.273 [2024-07-26 22:58:33.605377] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.605384] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.605391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x666120) 00:28:41.273 [2024-07-26 22:58:33.605400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.273 [2024-07-26 22:58:33.605425] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf770, cid 4, qid 0 00:28:41.273 [2024-07-26 22:58:33.605436] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf8d0, cid 5, qid 0 00:28:41.273 [2024-07-26 22:58:33.605579] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.273 [2024-07-26 22:58:33.605592] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.273 [2024-07-26 22:58:33.605598] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.605605] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf770) on tqpair=0x666120 00:28:41.273 [2024-07-26 22:58:33.605619] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.273 [2024-07-26 22:58:33.605628] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.273 [2024-07-26 22:58:33.605635] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.605641] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf8d0) on tqpair=0x666120 00:28:41.273 [2024-07-26 22:58:33.605657] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.605665] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x666120) 00:28:41.273 [2024-07-26 22:58:33.605675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.273 [2024-07-26 22:58:33.605695] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf8d0, cid 5, qid 0 00:28:41.273 [2024-07-26 22:58:33.605826] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.273 [2024-07-26 22:58:33.605841] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.273 [2024-07-26 22:58:33.605847] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.605854] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf8d0) on tqpair=0x666120 00:28:41.273 [2024-07-26 22:58:33.605870] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.605879] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x666120) 00:28:41.273 [2024-07-26 22:58:33.605889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.273 [2024-07-26 22:58:33.605909] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf8d0, cid 5, qid 0 00:28:41.273 [2024-07-26 22:58:33.606037] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:28:41.273 [2024-07-26 22:58:33.606074] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.273 [2024-07-26 22:58:33.606082] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.606089] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf8d0) on tqpair=0x666120 00:28:41.273 [2024-07-26 22:58:33.606106] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.606115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x666120) 00:28:41.273 [2024-07-26 22:58:33.606126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.273 [2024-07-26 22:58:33.606147] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf8d0, cid 5, qid 0 00:28:41.273 [2024-07-26 22:58:33.606280] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.273 [2024-07-26 22:58:33.606292] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.273 [2024-07-26 22:58:33.606299] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.606306] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf8d0) on tqpair=0x666120 00:28:41.273 [2024-07-26 22:58:33.606324] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.606334] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x666120) 00:28:41.273 [2024-07-26 22:58:33.606360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.273 [2024-07-26 22:58:33.606371] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.606379] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x666120) 00:28:41.273 [2024-07-26 22:58:33.606388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.273 [2024-07-26 22:58:33.606403] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.606411] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x666120) 00:28:41.273 [2024-07-26 22:58:33.606421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.273 [2024-07-26 22:58:33.606431] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.273 [2024-07-26 22:58:33.606439] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x666120) 00:28:41.273 [2024-07-26 22:58:33.606448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.273 [2024-07-26 22:58:33.606469] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf8d0, cid 5, qid 0 00:28:41.273 [2024-07-26 22:58:33.606480] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf770, cid 4, qid 0 00:28:41.273 [2024-07-26 22:58:33.606488] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bfa30, cid 6, qid 0 00:28:41.274 [2024-07-26 22:58:33.606495] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bfb90, cid 7, qid 0 00:28:41.274 [2024-07-26 22:58:33.606823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:41.274 [2024-07-26 22:58:33.606839] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:41.274 [2024-07-26 22:58:33.606845] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606852] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x666120): datao=0, datal=8192, cccid=5 00:28:41.274 [2024-07-26 22:58:33.606859] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf8d0) on tqpair(0x666120): expected_datao=0, payload_size=8192 00:28:41.274 [2024-07-26 22:58:33.606866] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606877] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606884] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606893] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:41.274 [2024-07-26 22:58:33.606901] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:41.274 [2024-07-26 22:58:33.606907] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606914] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x666120): datao=0, datal=512, cccid=4 00:28:41.274 [2024-07-26 22:58:33.606921] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf770) on tqpair(0x666120): expected_datao=0, payload_size=512 00:28:41.274 [2024-07-26 22:58:33.606928] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606937] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606944] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606953] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:41.274 [2024-07-26 22:58:33.606961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:41.274 [2024-07-26 22:58:33.606967] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606973] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x666120): datao=0, datal=512, cccid=6 00:28:41.274 [2024-07-26 22:58:33.606981] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bfa30) on tqpair(0x666120): expected_datao=0, payload_size=512 00:28:41.274 [2024-07-26 22:58:33.606988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.606996] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.607003] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.607011] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:41.274 [2024-07-26 22:58:33.607020] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:41.274 [2024-07-26 22:58:33.607029] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:41.274 [2024-07-26 22:58:33.607051] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x666120): datao=0, datal=4096, cccid=7
00:28:41.274 [2024-07-26 22:58:33.611068] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bfb90) on tqpair(0x666120): expected_datao=0, payload_size=4096
00:28:41.274 [2024-07-26 22:58:33.611080] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:41.274 [2024-07-26 22:58:33.611092] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:41.274 [2024-07-26 22:58:33.611099] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:41.274 [2024-07-26 22:58:33.611111] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:41.274 [2024-07-26 22:58:33.611121] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:41.274 [2024-07-26 22:58:33.611127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:41.274 [2024-07-26 22:58:33.611134] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf8d0) on tqpair=0x666120
00:28:41.274 [2024-07-26 22:58:33.611153] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:41.274 [2024-07-26 22:58:33.611164] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:41.274 [2024-07-26 22:58:33.611171] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:41.274 [2024-07-26 22:58:33.611177] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf770) on tqpair=0x666120
00:28:41.274 [2024-07-26 22:58:33.611190] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:41.274 [2024-07-26 22:58:33.611200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:41.274 [2024-07-26 22:58:33.611206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:41.274 [2024-07-26 22:58:33.611213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bfa30) on tqpair=0x666120
00:28:41.274 [2024-07-26 22:58:33.611226] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:41.274 [2024-07-26 22:58:33.611236] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:41.274 [2024-07-26 22:58:33.611243] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:41.274 [2024-07-26 22:58:33.611249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bfb90) on tqpair=0x666120
00:28:41.274 =====================================================
00:28:41.274 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:41.274 =====================================================
00:28:41.274 Controller Capabilities/Features
00:28:41.274 ================================
00:28:41.274 Vendor ID: 8086
00:28:41.274 Subsystem Vendor ID: 8086
00:28:41.274 Serial Number: SPDK00000000000001
00:28:41.274 Model Number: SPDK bdev Controller
00:28:41.274 Firmware Version: 24.05.1
00:28:41.274 Recommended Arb Burst: 6
00:28:41.274 IEEE OUI Identifier: e4 d2 5c
00:28:41.274 Multi-path I/O
00:28:41.274 May have multiple subsystem ports: Yes
00:28:41.274 May have multiple controllers: Yes
00:28:41.274 Associated with SR-IOV VF: No
00:28:41.274 Max Data Transfer Size: 131072
00:28:41.274 Max Number of Namespaces: 32
00:28:41.274 Max Number of I/O Queues: 127
00:28:41.274 NVMe Specification Version (VS): 1.3
00:28:41.274 NVMe Specification Version (Identify): 1.3
00:28:41.274 Maximum Queue Entries: 128
00:28:41.274 Contiguous Queues Required: Yes
00:28:41.274 Arbitration Mechanisms Supported
00:28:41.274 Weighted Round Robin: Not Supported
00:28:41.274 Vendor Specific: Not Supported
00:28:41.274 Reset Timeout: 15000 ms
00:28:41.274 Doorbell Stride: 4 bytes
00:28:41.274 NVM Subsystem Reset: Not Supported
00:28:41.274 Command Sets Supported
00:28:41.274 NVM Command Set: Supported
00:28:41.274 Boot Partition: Not Supported
00:28:41.274 Memory Page Size Minimum: 4096 bytes
00:28:41.274 Memory Page Size Maximum: 4096 bytes
00:28:41.274 Persistent Memory Region: Not Supported
00:28:41.274 Optional Asynchronous Events Supported
00:28:41.274 Namespace Attribute Notices: Supported
00:28:41.274 Firmware Activation Notices: Not Supported
00:28:41.275 ANA Change Notices: Not Supported
00:28:41.275 PLE Aggregate Log Change Notices: Not Supported
00:28:41.275 LBA Status Info Alert Notices: Not Supported
00:28:41.275 EGE Aggregate Log Change Notices: Not Supported
00:28:41.275 Normal NVM Subsystem Shutdown event: Not Supported
00:28:41.275 Zone Descriptor Change Notices: Not Supported
00:28:41.275 Discovery Log Change Notices: Not Supported
00:28:41.275 Controller Attributes
00:28:41.275 128-bit Host Identifier: Supported
00:28:41.275 Non-Operational Permissive Mode: Not Supported
00:28:41.275 NVM Sets: Not Supported
00:28:41.275 Read Recovery Levels: Not Supported
00:28:41.275 Endurance Groups: Not Supported
00:28:41.275 Predictable Latency Mode: Not Supported
00:28:41.275 Traffic Based Keep ALive: Not Supported
00:28:41.275 Namespace Granularity: Not Supported
00:28:41.275 SQ Associations: Not Supported
00:28:41.275 UUID List: Not Supported
00:28:41.275 Multi-Domain Subsystem: Not Supported
00:28:41.275 Fixed Capacity Management: Not Supported
00:28:41.275 Variable Capacity Management: Not Supported
00:28:41.275 Delete Endurance Group: Not Supported
00:28:41.275 Delete NVM Set: Not Supported
00:28:41.275 Extended LBA Formats Supported: Not Supported
00:28:41.275 Flexible Data Placement Supported: Not Supported
00:28:41.275
00:28:41.275 Controller Memory Buffer Support
00:28:41.275 ================================
00:28:41.275 Supported: No
00:28:41.275
00:28:41.275 Persistent Memory Region Support
00:28:41.275 ================================
00:28:41.275 Supported: No
00:28:41.275
00:28:41.275 Admin Command Set Attributes
00:28:41.275 ============================
00:28:41.275 Security Send/Receive: Not Supported
00:28:41.275 Format NVM: Not Supported
00:28:41.275 Firmware Activate/Download: Not Supported
00:28:41.275 Namespace Management: Not Supported
00:28:41.275 Device Self-Test: Not Supported
00:28:41.275 Directives: Not Supported
00:28:41.275 NVMe-MI: Not Supported
00:28:41.275 Virtualization Management: Not Supported
00:28:41.275 Doorbell Buffer Config: Not Supported
00:28:41.275 Get LBA Status Capability: Not Supported
00:28:41.275 Command & Feature Lockdown Capability: Not Supported
00:28:41.275 Abort Command Limit: 4
00:28:41.275 Async Event Request Limit: 4
00:28:41.275 Number of Firmware Slots: N/A
00:28:41.275 Firmware Slot 1 Read-Only: N/A
00:28:41.275 Firmware Activation Without Reset: N/A
00:28:41.275 Multiple Update Detection Support: N/A
00:28:41.275 Firmware Update Granularity: No Information Provided
00:28:41.275 Per-Namespace SMART Log: No
00:28:41.275 Asymmetric Namespace Access Log Page: Not Supported
00:28:41.275 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:28:41.275 Command Effects Log Page: Supported
00:28:41.275 Get Log Page Extended Data: Supported
00:28:41.275 Telemetry Log Pages: Not Supported
00:28:41.275 Persistent Event Log Pages: Not Supported
00:28:41.275 Supported Log Pages Log Page: May Support
00:28:41.275 Commands Supported & Effects Log Page: Not Supported
00:28:41.275 Feature Identifiers & Effects Log Page:May Support
00:28:41.275 NVMe-MI Commands & Effects Log Page: May Support
00:28:41.275 Data Area 4 for Telemetry Log: Not Supported
00:28:41.275 Error Log Page Entries Supported: 128
00:28:41.275 Keep Alive: Supported
00:28:41.275 Keep Alive Granularity: 10000 ms
00:28:41.275
00:28:41.275 NVM Command Set Attributes
00:28:41.275 ==========================
00:28:41.275 Submission Queue Entry Size
00:28:41.275 Max: 64
00:28:41.275 Min: 64
00:28:41.275 Completion Queue Entry Size
00:28:41.275 Max: 16
00:28:41.275 Min: 16
00:28:41.275 Number of Namespaces: 32
00:28:41.275 Compare Command: Supported
00:28:41.275 Write Uncorrectable Command: Not Supported
00:28:41.275 Dataset Management Command: Supported
00:28:41.275 Write Zeroes Command: Supported
00:28:41.275 Set Features Save Field: Not Supported
00:28:41.275 Reservations: Supported
00:28:41.275 Timestamp: Not Supported
00:28:41.275 Copy: Supported
00:28:41.275 Volatile Write Cache: Present
00:28:41.275 Atomic Write Unit (Normal): 1
00:28:41.275 Atomic Write Unit (PFail): 1
00:28:41.275 Atomic Compare & Write Unit: 1
00:28:41.275 Fused Compare & Write: Supported
00:28:41.275 Scatter-Gather List
00:28:41.275 SGL Command Set: Supported
00:28:41.275 SGL Keyed: Supported
00:28:41.275 SGL Bit Bucket Descriptor: Not Supported
00:28:41.275 SGL Metadata Pointer: Not Supported
00:28:41.275 Oversized SGL: Not Supported
00:28:41.275 SGL Metadata Address: Not Supported
00:28:41.275 SGL Offset: Supported
00:28:41.275 Transport SGL Data Block: Not Supported
00:28:41.275 Replay Protected Memory Block: Not Supported
00:28:41.275
00:28:41.275 Firmware Slot Information
00:28:41.275 =========================
00:28:41.275 Active slot: 1
00:28:41.275 Slot 1 Firmware Revision: 24.05.1
00:28:41.275
00:28:41.275
00:28:41.275 Commands Supported and Effects
00:28:41.275 ==============================
00:28:41.275 Admin Commands
00:28:41.275 --------------
00:28:41.275 Get Log Page (02h): Supported
00:28:41.275 Identify (06h): Supported
00:28:41.275 Abort (08h): Supported
00:28:41.275 Set Features (09h): Supported
00:28:41.275 Get Features (0Ah): Supported
00:28:41.275 Asynchronous Event Request (0Ch): Supported
00:28:41.275 Keep Alive (18h): Supported
00:28:41.276 I/O Commands
00:28:41.276 ------------
00:28:41.276 Flush (00h): Supported LBA-Change
00:28:41.276 Write (01h): Supported LBA-Change
00:28:41.276 Read (02h): Supported
00:28:41.276 Compare (05h): Supported
00:28:41.276 Write Zeroes (08h): Supported LBA-Change
00:28:41.276 Dataset Management (09h): Supported LBA-Change
00:28:41.276 Copy (19h): Supported LBA-Change
00:28:41.276 Unknown (79h): Supported LBA-Change
00:28:41.276 Unknown (7Ah): Supported
00:28:41.276
00:28:41.276 Error Log
00:28:41.276 =========
00:28:41.276
00:28:41.276 Arbitration
00:28:41.276 ===========
00:28:41.276 Arbitration Burst: 1
00:28:41.276
00:28:41.276 Power Management
00:28:41.276 ================
00:28:41.276 Number of Power States: 1
00:28:41.276 Current Power State: Power State #0
00:28:41.276 Power State #0:
00:28:41.276 Max Power: 0.00 W
00:28:41.276 Non-Operational State: Operational
00:28:41.276 Entry Latency: Not Reported
00:28:41.276 Exit Latency: Not Reported
00:28:41.276 Relative Read Throughput: 0
00:28:41.276 Relative Read Latency: 0
00:28:41.276 Relative Write Throughput: 0
00:28:41.276 Relative Write Latency: 0
00:28:41.276 Idle Power: Not Reported
00:28:41.276 Active Power: Not Reported
00:28:41.276 Non-Operational Permissive Mode: Not Supported 00:28:41.276 00:28:41.276 Health Information 00:28:41.276 ================== 00:28:41.276 Critical Warnings: 00:28:41.276 Available Spare Space: OK 00:28:41.276 Temperature: OK 00:28:41.276 Device Reliability: OK 00:28:41.276 Read Only: No 00:28:41.276 Volatile Memory Backup: OK 00:28:41.276 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:41.276 Temperature Threshold: [2024-07-26 22:58:33.611397] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.276 [2024-07-26 22:58:33.611409] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x666120) 00:28:41.276 [2024-07-26 22:58:33.611421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.276 [2024-07-26 22:58:33.611444] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bfb90, cid 7, qid 0 00:28:41.276 [2024-07-26 22:58:33.611594] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.276 [2024-07-26 22:58:33.611606] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.276 [2024-07-26 22:58:33.611613] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.276 [2024-07-26 22:58:33.611619] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bfb90) on tqpair=0x666120 00:28:41.276 [2024-07-26 22:58:33.611660] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:41.276 [2024-07-26 22:58:33.611681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.276 [2024-07-26 22:58:33.611692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.276 [2024-07-26 22:58:33.611702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.276 [2024-07-26 22:58:33.611711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.276 [2024-07-26 22:58:33.611723] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.276 [2024-07-26 22:58:33.611735] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.276 [2024-07-26 22:58:33.611742] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.276 [2024-07-26 22:58:33.611752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.276 [2024-07-26 22:58:33.611774] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.276 [2024-07-26 22:58:33.611904] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.276 [2024-07-26 22:58:33.611919] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.276 [2024-07-26 22:58:33.611926] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.276 [2024-07-26 22:58:33.611933] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.276 [2024-07-26 22:58:33.611944] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.276 [2024-07-26 22:58:33.611951] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.611958] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.277 [2024-07-26 22:58:33.611968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-07-26 22:58:33.611993] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.277 [2024-07-26 22:58:33.612164] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.277 [2024-07-26 22:58:33.612180] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.277 [2024-07-26 22:58:33.612187] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612194] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.277 [2024-07-26 22:58:33.612202] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:41.277 [2024-07-26 22:58:33.612210] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:41.277 [2024-07-26 22:58:33.612227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612236] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.277 [2024-07-26 22:58:33.612253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-07-26 22:58:33.612274] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.277 [2024-07-26 22:58:33.612426] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.277 [2024-07-26 22:58:33.612441] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.277 [2024-07-26 22:58:33.612448] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612454] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.277 [2024-07-26 22:58:33.612472] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612487] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.277 [2024-07-26 22:58:33.612498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-07-26 22:58:33.612517] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.277 [2024-07-26 22:58:33.612643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.277 [2024-07-26 22:58:33.612658] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.277 [2024-07-26 22:58:33.612665] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612671] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.277 [2024-07-26 22:58:33.612691] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612702] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612708] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.277 [2024-07-26 22:58:33.612718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-07-26 22:58:33.612738] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.277 [2024-07-26 22:58:33.612865] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.277 [2024-07-26 22:58:33.612877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.277 [2024-07-26 22:58:33.612883] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612890] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.277 [2024-07-26 22:58:33.612905] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.612921] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.277 [2024-07-26 22:58:33.612931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-07-26 22:58:33.612951] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.277 [2024-07-26 22:58:33.613095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.277 [2024-07-26 22:58:33.613109] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.277 [2024-07-26 22:58:33.613116] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.613123] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.277 [2024-07-26 22:58:33.613139] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.613149] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.613155] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.277 [2024-07-26 22:58:33.613166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-07-26 22:58:33.613187] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.277 [2024-07-26 22:58:33.613320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.277 [2024-07-26 22:58:33.613335] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.277 [2024-07-26 22:58:33.613342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.613364] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.277 [2024-07-26 22:58:33.613381] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.613390] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.277 [2024-07-26 
22:58:33.613396] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.277 [2024-07-26 22:58:33.613407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-07-26 22:58:33.613427] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.277 [2024-07-26 22:58:33.613556] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.277 [2024-07-26 22:58:33.613571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.277 [2024-07-26 22:58:33.613578] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.613584] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.277 [2024-07-26 22:58:33.613604] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.613614] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.277 [2024-07-26 22:58:33.613620] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.277 [2024-07-26 22:58:33.613630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.277 [2024-07-26 22:58:33.613650] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.277 [2024-07-26 22:58:33.613801] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.277 [2024-07-26 22:58:33.613816] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.277 [2024-07-26 22:58:33.613823] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.613830] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.278 [2024-07-26 22:58:33.613847] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.613856] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.613863] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.278 [2024-07-26 22:58:33.613874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.278 [2024-07-26 22:58:33.613894] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.278 [2024-07-26 22:58:33.614036] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.278 [2024-07-26 22:58:33.614049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.278 [2024-07-26 22:58:33.614077] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614085] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.278 [2024-07-26 22:58:33.614102] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614112] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614119] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.278 [2024-07-26 22:58:33.614129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.278 [2024-07-26 22:58:33.614150] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.278 [2024-07-26 22:58:33.614280] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.278 [2024-07-26 22:58:33.614293] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.278 [2024-07-26 22:58:33.614299] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614306] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.278 [2024-07-26 22:58:33.614322] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614332] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614339] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.278 [2024-07-26 22:58:33.614349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.278 [2024-07-26 22:58:33.614384] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.278 [2024-07-26 22:58:33.614524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.278 [2024-07-26 22:58:33.614539] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.278 [2024-07-26 22:58:33.614546] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614552] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.278 [2024-07-26 22:58:33.614568] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614581] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614588] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.278 [2024-07-26 22:58:33.614598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.278 [2024-07-26 22:58:33.614618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.278 [2024-07-26 22:58:33.614743] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.278 [2024-07-26 22:58:33.614755] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.278 [2024-07-26 22:58:33.614762] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.278 [2024-07-26 22:58:33.614784] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614793] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614800] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.278 [2024-07-26 22:58:33.614810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.278 [2024-07-26 22:58:33.614829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, 
qid 0 00:28:41.278 [2024-07-26 22:58:33.614951] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.278 [2024-07-26 22:58:33.614963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.278 [2024-07-26 22:58:33.614970] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.614976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.278 [2024-07-26 22:58:33.614992] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.615001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.615007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.278 [2024-07-26 22:58:33.615017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.278 [2024-07-26 22:58:33.615037] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.278 [2024-07-26 22:58:33.619078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.278 [2024-07-26 22:58:33.619092] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.278 [2024-07-26 22:58:33.619099] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.619106] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.278 [2024-07-26 22:58:33.619124] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.619134] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.619140] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x666120) 00:28:41.278 [2024-07-26 22:58:33.619151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.278 [2024-07-26 22:58:33.619173] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf610, cid 3, qid 0 00:28:41.278 [2024-07-26 22:58:33.619316] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:41.278 [2024-07-26 22:58:33.619332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:41.278 [2024-07-26 22:58:33.619339] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:41.278 [2024-07-26 22:58:33.619345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf610) on tqpair=0x666120 00:28:41.278 [2024-07-26 22:58:33.619374] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:28:41.278 0 Kelvin (-273 Celsius) 00:28:41.278 Available Spare: 0% 00:28:41.278 Available Spare Threshold: 0% 00:28:41.278 Life Percentage Used: 0% 00:28:41.278 Data Units Read: 0 00:28:41.278 Data Units Written: 0 00:28:41.278 Host Read Commands: 0 00:28:41.278 Host Write Commands: 0 00:28:41.278 Controller Busy Time: 0 minutes 00:28:41.278 Power Cycles: 0 00:28:41.278 Power On Hours: 0 hours 00:28:41.278 Unsafe Shutdowns: 0 00:28:41.278 Unrecoverable Media Errors: 0 00:28:41.278 Lifetime Error Log Entries: 0 00:28:41.278 Warning Temperature Time: 0 minutes 00:28:41.279 Critical Temperature Time: 0 minutes 00:28:41.279 00:28:41.279 Number of Queues 00:28:41.279 
================ 00:28:41.279 Number of I/O Submission Queues: 127 00:28:41.279 Number of I/O Completion Queues: 127 00:28:41.279 00:28:41.279 Active Namespaces 00:28:41.279 ================= 00:28:41.279 Namespace ID:1 00:28:41.279 Error Recovery Timeout: Unlimited 00:28:41.279 Command Set Identifier: NVM (00h) 00:28:41.279 Deallocate: Supported 00:28:41.279 Deallocated/Unwritten Error: Not Supported 00:28:41.279 Deallocated Read Value: Unknown 00:28:41.279 Deallocate in Write Zeroes: Not Supported 00:28:41.279 Deallocated Guard Field: 0xFFFF 00:28:41.279 Flush: Supported 00:28:41.279 Reservation: Supported 00:28:41.279 Namespace Sharing Capabilities: Multiple Controllers 00:28:41.279 Size (in LBAs): 131072 (0GiB) 00:28:41.279 Capacity (in LBAs): 131072 (0GiB) 00:28:41.279 Utilization (in LBAs): 131072 (0GiB) 00:28:41.279 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:41.279 EUI64: ABCDEF0123456789 00:28:41.279 UUID: df1a898d-18ad-4c93-b37f-442cea631552 00:28:41.279 Thin Provisioning: Not Supported 00:28:41.279 Per-NS Atomic Units: Yes 00:28:41.279 Atomic Boundary Size (Normal): 0 00:28:41.279 Atomic Boundary Size (PFail): 0 00:28:41.279 Atomic Boundary Offset: 0 00:28:41.279 Maximum Single Source Range Length: 65535 00:28:41.279 Maximum Copy Length: 65535 00:28:41.279 Maximum Source Range Count: 1 00:28:41.279 NGUID/EUI64 Never Reused: No 00:28:41.279 Namespace Write Protected: No 00:28:41.279 Number of LBA Formats: 1 00:28:41.279 Current LBA Format: LBA Format #00 00:28:41.279 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:41.279 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:41.279 rmmod nvme_tcp 00:28:41.279 rmmod nvme_fabrics 00:28:41.279 rmmod nvme_keyring 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3637061 ']' 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3637061 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3637061 ']' 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3637061 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@951 -- # uname 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637061 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637061' 00:28:41.279 killing process with pid 3637061 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3637061 00:28:41.279 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3637061 00:28:41.539 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:41.539 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:41.539 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:41.539 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:41.539 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:41.539 22:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.539 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.539 22:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.076 22:58:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:44.076 00:28:44.076 real 0m5.419s 00:28:44.076 user 0m4.565s 00:28:44.076 sys 0m1.839s 00:28:44.076 22:58:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:44.076 22:58:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.076 ************************************ 00:28:44.076 END TEST nvmf_identify 00:28:44.076 ************************************ 00:28:44.076 22:58:36 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:44.076 22:58:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:44.076 22:58:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:44.076 22:58:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:44.076 ************************************ 00:28:44.076 START TEST nvmf_perf 00:28:44.076 ************************************ 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:44.076 * Looking for test storage... 
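(Aside, before the perf test output begins: the identify pass that just finished can be reproduced by hand with stock nvme-cli, which the harness already uses for `nvme gen-hostnqn` and `nvme connect`. A minimal sketch, assuming a target is still listening on 10.0.0.2:4420 and exporting nqn.2016-06.io.spdk:cnode1 as above; device names will vary:)

nvme discover -t tcp -a 10.0.0.2 -s 4420            # list subsystems on the portal
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0                             # controller data, as dumped above
nvme id-ns /dev/nvme0 -n 1                          # namespace data (NGUID, EUI64, LBA formats)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1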
00:28:44.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.076 22:58:36 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:44.076 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:44.077 22:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:44.077 22:58:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:45.981 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.981 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:45.981 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:45.981 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.982 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.982 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:45.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:45.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:28:45.982 00:28:45.982 --- 10.0.0.2 ping statistics --- 00:28:45.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.982 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:28:45.982 00:28:45.982 --- 10.0.0.1 ping statistics --- 00:28:45.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.982 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3639137 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3639137 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3639137 ']' 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:45.982 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:45.982 [2024-07-26 22:58:38.226897] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:45.982 [2024-07-26 22:58:38.226963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.982 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.983 [2024-07-26 22:58:38.292858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.983 [2024-07-26 22:58:38.384148] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.983 [2024-07-26 22:58:38.384212] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
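(The namespace plumbing nvmftestinit just ran condenses to the sketch below, using the cvl_0_0/cvl_0_1 interface names from this machine; substitute your own port names elsewhere. One port moves into a private namespace and becomes the target side at 10.0.0.2, its peer stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched under `ip netns exec`:)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator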
00:28:45.983 [2024-07-26 22:58:38.384229] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.983 [2024-07-26 22:58:38.384243] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.983 [2024-07-26 22:58:38.384256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.983 [2024-07-26 22:58:38.384324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.983 [2024-07-26 22:58:38.384379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.983 [2024-07-26 22:58:38.384498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.983 [2024-07-26 22:58:38.384500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.242 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:46.242 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:46.242 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:46.242 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.242 22:58:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:46.242 22:58:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.242 22:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:46.242 22:58:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:49.527 22:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:49.527 22:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:49.527 22:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:49.527 22:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:49.785 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:49.785 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:49.785 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:49.785 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:49.785 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:50.042 [2024-07-26 22:58:42.425456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.042 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:50.300 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:50.300 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:50.557 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:50.557 22:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:50.815 22:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.073 [2024-07-26 22:58:43.387771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.073 22:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:51.330 22:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:51.330 22:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:51.330 22:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:51.330 22:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:52.704 Initializing NVMe Controllers 00:28:52.704 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:52.704 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:52.704 Initialization complete. Launching workers. 00:28:52.704 ======================================================== 00:28:52.704 Latency(us) 00:28:52.704 Device Information : IOPS MiB/s Average min max 00:28:52.704 PCIE (0000:88:00.0) NSID 1 from core 0: 84534.48 330.21 378.15 28.59 6243.57 00:28:52.704 ======================================================== 00:28:52.704 Total : 84534.48 330.21 378.15 28.59 6243.57 00:28:52.704 00:28:52.704 22:58:44 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.704 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.080 Initializing NVMe Controllers 00:28:54.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:54.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:54.080 Initialization complete. Launching workers. 
00:28:54.080 ======================================================== 00:28:54.080 Latency(us) 00:28:54.080 Device Information : IOPS MiB/s Average min max 00:28:54.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 131.00 0.51 7928.01 226.49 46054.46 00:28:54.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.00 0.28 14147.30 5118.65 47891.14 00:28:54.080 ======================================================== 00:28:54.080 Total : 202.00 0.79 10114.00 226.49 47891.14 00:28:54.080 00:28:54.080 22:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.080 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.457 Initializing NVMe Controllers 00:28:55.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:55.457 Initialization complete. Launching workers. 00:28:55.457 ======================================================== 00:28:55.457 Latency(us) 00:28:55.457 Device Information : IOPS MiB/s Average min max 00:28:55.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7974.40 31.15 4012.88 589.75 7790.45 00:28:55.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3883.27 15.17 8253.85 4278.69 16086.24 00:28:55.457 ======================================================== 00:28:55.457 Total : 11857.68 46.32 5401.75 589.75 16086.24 00:28:55.457 00:28:55.457 22:58:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:55.457 22:58:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:55.457 22:58:47 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:55.457 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.993 Initializing NVMe Controllers 00:28:57.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.993 Controller IO queue size 128, less than required. 00:28:57.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:57.993 Controller IO queue size 128, less than required. 00:28:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:57.994 Initialization complete. Launching workers. 
00:28:57.994 ======================================================== 00:28:57.994 Latency(us) 00:28:57.994 Device Information : IOPS MiB/s Average min max 00:28:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 838.95 209.74 159090.76 90876.89 216653.44 00:28:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 561.96 140.49 238330.39 94377.41 356746.53 00:28:57.994 ======================================================== 00:28:57.994 Total : 1400.91 350.23 190876.94 90876.89 356746.53 00:28:57.994 00:28:57.994 22:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:57.994 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.994 No valid NVMe controllers or AIO or URING devices found 00:28:57.994 Initializing NVMe Controllers 00:28:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.994 Controller IO queue size 128, less than required. 00:28:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:57.994 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:57.994 Controller IO queue size 128, less than required. 00:28:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:57.994 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:57.994 WARNING: Some requested NVMe devices were skipped 00:28:57.994 22:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:57.994 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.529 Initializing NVMe Controllers 00:29:00.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:00.529 Controller IO queue size 128, less than required. 00:29:00.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:00.529 Controller IO queue size 128, less than required. 00:29:00.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:00.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:00.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:00.529 Initialization complete. Launching workers. 
00:29:00.529 00:29:00.529 ==================== 00:29:00.529 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:00.529 TCP transport: 00:29:00.529 polls: 23186 00:29:00.529 idle_polls: 7931 00:29:00.529 sock_completions: 15255 00:29:00.529 nvme_completions: 3613 00:29:00.529 submitted_requests: 5402 00:29:00.529 queued_requests: 1 00:29:00.529 00:29:00.529 ==================== 00:29:00.529 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:00.529 TCP transport: 00:29:00.529 polls: 25451 00:29:00.529 idle_polls: 15003 00:29:00.529 sock_completions: 10448 00:29:00.529 nvme_completions: 3889 00:29:00.529 submitted_requests: 5860 00:29:00.529 queued_requests: 1 00:29:00.529 ======================================================== 00:29:00.529 Latency(us) 00:29:00.529 Device Information : IOPS MiB/s Average min max 00:29:00.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 901.48 225.37 146674.22 92706.86 218341.72 00:29:00.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 970.37 242.59 132437.43 66782.77 198159.02 00:29:00.529 ======================================================== 00:29:00.529 Total : 1871.85 467.96 139293.87 66782.77 218341.72 00:29:00.529 00:29:00.529 22:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:00.529 22:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:00.787 22:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:00.787 22:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:00.787 22:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:04.102 22:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=6618a0fc-4019-4bb8-9763-f40477f9e817 00:29:04.102 22:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 6618a0fc-4019-4bb8-9763-f40477f9e817 00:29:04.102 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=6618a0fc-4019-4bb8-9763-f40477f9e817 00:29:04.102 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:04.102 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:04.102 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:04.102 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:04.360 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:04.360 { 00:29:04.360 "uuid": "6618a0fc-4019-4bb8-9763-f40477f9e817", 00:29:04.360 "name": "lvs_0", 00:29:04.360 "base_bdev": "Nvme0n1", 00:29:04.360 "total_data_clusters": 238234, 00:29:04.360 "free_clusters": 238234, 00:29:04.360 "block_size": 512, 00:29:04.360 "cluster_size": 4194304 00:29:04.360 } 00:29:04.360 ]' 00:29:04.360 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="6618a0fc-4019-4bb8-9763-f40477f9e817") .free_clusters' 00:29:04.360 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:29:04.360 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="6618a0fc-4019-4bb8-9763-f40477f9e817") .cluster_size' 00:29:04.360 22:58:56 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:04.360 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:29:04.360 22:58:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:29:04.360 952936 00:29:04.360 22:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:04.360 22:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:04.360 22:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6618a0fc-4019-4bb8-9763-f40477f9e817 lbd_0 20480 00:29:04.926 22:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=2ff48a8c-cef3-4493-85a6-4dac62f6958b 00:29:04.926 22:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 2ff48a8c-cef3-4493-85a6-4dac62f6958b lvs_n_0 00:29:05.860 22:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d317a30b-6a62-4caa-85c0-efde53c7bb17 00:29:05.860 22:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d317a30b-6a62-4caa-85c0-efde53c7bb17 00:29:05.860 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=d317a30b-6a62-4caa-85c0-efde53c7bb17 00:29:05.860 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:05.860 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:05.860 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:05.860 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:06.118 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:06.118 { 00:29:06.119 "uuid": "6618a0fc-4019-4bb8-9763-f40477f9e817", 00:29:06.119 "name": "lvs_0", 00:29:06.119 "base_bdev": "Nvme0n1", 00:29:06.119 "total_data_clusters": 238234, 00:29:06.119 "free_clusters": 233114, 00:29:06.119 "block_size": 512, 00:29:06.119 "cluster_size": 4194304 00:29:06.119 }, 00:29:06.119 { 00:29:06.119 "uuid": "d317a30b-6a62-4caa-85c0-efde53c7bb17", 00:29:06.119 "name": "lvs_n_0", 00:29:06.119 "base_bdev": "2ff48a8c-cef3-4493-85a6-4dac62f6958b", 00:29:06.119 "total_data_clusters": 5114, 00:29:06.119 "free_clusters": 5114, 00:29:06.119 "block_size": 512, 00:29:06.119 "cluster_size": 4194304 00:29:06.119 } 00:29:06.119 ]' 00:29:06.119 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="d317a30b-6a62-4caa-85c0-efde53c7bb17") .free_clusters' 00:29:06.119 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:29:06.119 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d317a30b-6a62-4caa-85c0-efde53c7bb17") .cluster_size' 00:29:06.119 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:06.119 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:29:06.119 22:58:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:29:06.119 20456 00:29:06.119 22:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:06.119 22:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d317a30b-6a62-4caa-85c0-efde53c7bb17 lbd_nest_0 20456 00:29:06.376 22:58:58 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=b2204eb6-b89d-4218-84e5-e404b9f3ba31 00:29:06.376 22:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:06.633 22:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:06.633 22:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b2204eb6-b89d-4218-84e5-e404b9f3ba31 00:29:06.891 22:58:59 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.149 22:58:59 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:07.149 22:58:59 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:07.149 22:58:59 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:07.149 22:58:59 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:07.149 22:58:59 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.149 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.351 Initializing NVMe Controllers 00:29:19.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.351 Initialization complete. Launching workers. 00:29:19.351 ======================================================== 00:29:19.351 Latency(us) 00:29:19.351 Device Information : IOPS MiB/s Average min max 00:29:19.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.58 0.02 22433.76 229.77 46460.83 00:29:19.351 ======================================================== 00:29:19.351 Total : 44.58 0.02 22433.76 229.77 46460.83 00:29:19.351 00:29:19.351 22:59:09 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:19.351 22:59:09 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:19.351 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.331 Initializing NVMe Controllers 00:29:29.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:29.331 Initialization complete. Launching workers. 
00:29:29.331 ======================================================== 00:29:29.331 Latency(us) 00:29:29.331 Device Information : IOPS MiB/s Average min max 00:29:29.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.40 9.80 12764.03 3986.29 50887.06 00:29:29.331 ======================================================== 00:29:29.331 Total : 78.40 9.80 12764.03 3986.29 50887.06 00:29:29.331 00:29:29.331 22:59:20 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:29.331 22:59:20 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:29.331 22:59:20 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.331 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.306 Initializing NVMe Controllers 00:29:39.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:39.306 Initialization complete. Launching workers. 00:29:39.306 ======================================================== 00:29:39.306 Latency(us) 00:29:39.306 Device Information : IOPS MiB/s Average min max 00:29:39.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6943.51 3.39 4607.98 313.77 12062.78 00:29:39.306 ======================================================== 00:29:39.306 Total : 6943.51 3.39 4607.98 313.77 12062.78 00:29:39.306 00:29:39.306 22:59:30 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:39.306 22:59:30 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:39.306 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.329 Initializing NVMe Controllers 00:29:49.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:49.329 Initialization complete. Launching workers. 00:29:49.329 ======================================================== 00:29:49.329 Latency(us) 00:29:49.329 Device Information : IOPS MiB/s Average min max 00:29:49.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1598.80 199.85 20026.63 2111.87 44336.71 00:29:49.329 ======================================================== 00:29:49.329 Total : 1598.80 199.85 20026.63 2111.87 44336.71 00:29:49.329 00:29:49.329 22:59:41 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:49.329 22:59:41 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:49.329 22:59:41 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:49.329 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.309 Initializing NVMe Controllers 00:29:59.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.309 Controller IO queue size 128, less than required. 00:29:59.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:59.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.309 Initialization complete. Launching workers. 00:29:59.309 ======================================================== 00:29:59.309 Latency(us) 00:29:59.309 Device Information : IOPS MiB/s Average min max 00:29:59.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10355.92 5.06 12365.93 1759.17 25335.71 00:29:59.309 ======================================================== 00:29:59.309 Total : 10355.92 5.06 12365.93 1759.17 25335.71 00:29:59.309 00:29:59.309 22:59:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:59.309 22:59:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.309 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.515 Initializing NVMe Controllers 00:30:11.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.515 Controller IO queue size 128, less than required. 00:30:11.515 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:11.515 Initialization complete. Launching workers. 00:30:11.515 ======================================================== 00:30:11.515 Latency(us) 00:30:11.515 Device Information : IOPS MiB/s Average min max 00:30:11.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.68 150.84 106437.45 23973.00 194982.18 00:30:11.515 ======================================================== 00:30:11.515 Total : 1206.68 150.84 106437.45 23973.00 194982.18 00:30:11.515 00:30:11.515 23:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.515 23:00:02 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b2204eb6-b89d-4218-84e5-e404b9f3ba31 00:30:11.515 23:00:02 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2ff48a8c-cef3-4493-85a6-4dac62f6958b 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:11.515 rmmod nvme_tcp 00:30:11.515 rmmod nvme_fabrics 00:30:11.515 rmmod nvme_keyring 00:30:11.515 23:00:03 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3639137 ']' 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3639137 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3639137 ']' 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3639137 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3639137 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3639137' 00:30:11.515 killing process with pid 3639137 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3639137 00:30:11.515 23:00:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3639137 00:30:13.413 23:00:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:13.413 23:00:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:13.413 23:00:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:13.413 23:00:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:13.413 23:00:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:13.413 23:00:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.413 23:00:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.413 23:00:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.313 23:00:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:15.313 00:30:15.313 real 1m31.361s 00:30:15.313 user 5m35.577s 00:30:15.313 sys 0m16.360s 00:30:15.313 23:00:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:15.313 23:00:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:15.313 ************************************ 00:30:15.313 END TEST nvmf_perf 00:30:15.313 ************************************ 00:30:15.313 23:00:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:15.313 23:00:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:15.313 23:00:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:15.313 23:00:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.313 ************************************ 00:30:15.313 START TEST nvmf_fio_host 00:30:15.313 ************************************ 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:15.313 * Looking for test storage... 
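(Stripped of xtrace noise, the target provisioning that perf.sh drove above reduces to a short RPC sequence followed by the fabrics perf run; a sketch with the rpc.py and spdk_nvme_perf paths shortened, values exactly as logged:)

rpc.py bdev_malloc_create 64 512                    # returns Malloc0
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

(The lvol sizing computed along the way also checks out: lvs_0 reports 238234 free clusters at a 4194304-byte cluster size, i.e. 238234 x 4 MiB = 952936 free MB, which the script clamps to the 20480 MB lbd_0; the nested lvs_n_0 reports 5114 x 4 MiB = 20456 MB, under the 20480 cap, so lbd_nest_0 is created at 20456.)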
00:30:15.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.313 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:15.314 23:00:07 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:17.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:17.214 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
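
The device scan running here is table-driven: nvmf/common.sh buckets candidate NICs into e810, x722 and mlx arrays by PCI vendor:device ID before deciding what to test (both ports on this node match 0x8086:0x159b, an E810 "ice" part). A rough equivalent built on lspci rather than the script's internal pci_bus_cache; the IDs are the ones visible in the trace, and the mlx list is abbreviated:

    intel=8086 mellanox=15b3
    declare -a e810=() x722=() mlx=()
    while read -r addr vd; do
        case "$vd" in
            "$intel:1592" | "$intel:159b") e810+=("$addr") ;;  # E810 / ice
            "$intel:37d2")                 x722+=("$addr") ;;  # X722
            "$mellanox:"*)                 mlx+=("$addr")  ;;  # ConnectX
        esac
    done < <(lspci -Dn | awk '{print $1, $3}')   # "0000:0a:00.0 8086:159b"
    printf 'e810 candidate: %s\n' "${e810[@]}"
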
00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:17.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:17.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:17.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
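
Mapping each selected PCI function to its kernel interface is pure sysfs, which is how cvl_0_0 and cvl_0_1 were just resolved for 0000:0a:00.0 and 0000:0a:00.1; the [[ up == up ]] checks in the trace suggest an operstate filter so that only live links survive. A minimal sketch of that lookup, with the address taken from the trace:

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep the leaf names
    for dev in "${pci_net_devs[@]}"; do
        [ "$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)" = up ] \
            && echo "usable: $dev ($pci)"
    done
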
00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:17.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:30:17.215 00:30:17.215 --- 10.0.0.2 ping statistics --- 00:30:17.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.215 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:30:17.215 00:30:17.215 --- 10.0.0.1 ping statistics --- 00:30:17.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.215 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3651725 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3651725 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3651725 ']' 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:17.215 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.216 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:17.216 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.216 [2024-07-26 23:00:09.690958] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:17.216 [2024-07-26 23:00:09.691043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.473 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.473 [2024-07-26 23:00:09.758764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:17.473 [2024-07-26 23:00:09.853933] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
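
Everything between the device scan and the ping statistics above is the point-to-point lab setup that nvmf_tcp_init performs: one port (cvl_0_0) is moved into a fresh namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and TCP/4420 is opened before both directions are ping-verified. Condensed from the trace, same interface names and addresses:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target side leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

With the path proven, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why its DPDK/EAL startup notices follow immediately below.
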
00:30:17.473 [2024-07-26 23:00:09.853988] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.473 [2024-07-26 23:00:09.854014] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.473 [2024-07-26 23:00:09.854027] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.473 [2024-07-26 23:00:09.854039] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.473 [2024-07-26 23:00:09.854114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.473 [2024-07-26 23:00:09.854142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.473 [2024-07-26 23:00:09.854165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.473 [2024-07-26 23:00:09.854170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.730 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:17.730 23:00:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:30:17.730 23:00:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:17.730 [2024-07-26 23:00:10.212395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.986 23:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:17.986 23:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.986 23:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.986 23:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:18.242 Malloc1 00:30:18.242 23:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.499 23:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:18.756 23:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.013 [2024-07-26 23:00:11.268105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.013 23:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:19.271 23:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:19.271 23:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:19.271 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:19.271 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:19.271 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:19.271 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:19.271 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:19.271 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:19.272 23:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:19.272 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:19.272 fio-3.35 00:30:19.272 Starting 1 thread 00:30:19.272 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.797 00:30:21.797 test: (groupid=0, jobs=1): err= 0: pid=3652080: Fri Jul 26 23:00:14 2024 00:30:21.797 read: IOPS=9198, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec) 00:30:21.797 slat (nsec): min=1986, max=153482, avg=2542.27, stdev=1725.86 00:30:21.797 clat (usec): min=3338, max=13670, avg=7677.45, stdev=561.08 00:30:21.797 lat (usec): min=3374, max=13672, avg=7679.99, stdev=560.99 00:30:21.797 clat percentiles (usec): 00:30:21.797 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:30:21.797 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7767], 00:30:21.797 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8291], 95.00th=[ 8455], 00:30:21.797 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11600], 99.95th=[13304], 00:30:21.797 | 99.99th=[13698] 00:30:21.797 bw ( KiB/s): 
min=35640, max=37472, per=99.88%, avg=36752.00, stdev=787.12, samples=4 00:30:21.797 iops : min= 8910, max= 9368, avg=9188.00, stdev=196.78, samples=4 00:30:21.797 write: IOPS=9202, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec); 0 zone resets 00:30:21.797 slat (usec): min=2, max=138, avg= 2.66, stdev= 1.37 00:30:21.797 clat (usec): min=1318, max=11725, avg=6150.82, stdev=496.31 00:30:21.797 lat (usec): min=1325, max=11727, avg=6153.48, stdev=496.27 00:30:21.797 clat percentiles (usec): 00:30:21.798 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:30:21.798 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:30:21.798 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6915], 00:30:21.798 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[10290], 99.95th=[10814], 00:30:21.798 | 99.99th=[11731] 00:30:21.798 bw ( KiB/s): min=36408, max=37040, per=100.00%, avg=36822.00, stdev=282.42, samples=4 00:30:21.798 iops : min= 9102, max= 9260, avg=9205.50, stdev=70.60, samples=4 00:30:21.798 lat (msec) : 2=0.01%, 4=0.08%, 10=99.76%, 20=0.15% 00:30:21.798 cpu : usr=56.76%, sys=36.26%, ctx=59, majf=0, minf=31 00:30:21.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:21.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.798 issued rwts: total=18453,18461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.798 00:30:21.798 Run status group 0 (all jobs): 00:30:21.798 READ: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:30:21.798 WRITE: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:21.798 23:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:22.056 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:22.056 fio-3.35 00:30:22.056 Starting 1 thread 00:30:22.056 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.621 00:30:24.621 test: (groupid=0, jobs=1): err= 0: pid=3652533: Fri Jul 26 23:00:16 2024 00:30:24.621 read: IOPS=6363, BW=99.4MiB/s (104MB/s)(199MiB/2005msec) 00:30:24.621 slat (nsec): min=3009, max=93453, avg=3761.88, stdev=1740.10 00:30:24.621 clat (usec): min=3409, max=23349, avg=11750.78, stdev=2720.41 00:30:24.621 lat (usec): min=3412, max=23353, avg=11754.54, stdev=2720.48 00:30:24.621 clat percentiles (usec): 00:30:24.621 | 1.00th=[ 5932], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[ 9372], 00:30:24.621 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11600], 60.00th=[12387], 00:30:24.621 | 70.00th=[13173], 80.00th=[14091], 90.00th=[15139], 95.00th=[16188], 00:30:24.621 | 99.00th=[19006], 99.50th=[20317], 99.90th=[21890], 99.95th=[23200], 00:30:24.621 | 99.99th=[23200] 00:30:24.621 bw ( KiB/s): min=43360, max=62144, per=49.83%, avg=50728.00, stdev=8050.08, samples=4 00:30:24.621 iops : min= 2710, max= 3884, avg=3170.50, stdev=503.13, samples=4 00:30:24.621 write: IOPS=3627, BW=56.7MiB/s (59.4MB/s)(104MiB/1831msec); 0 zone resets 00:30:24.621 slat (usec): min=30, max=191, avg=34.06, stdev= 5.79 00:30:24.621 clat (usec): min=7320, max=30107, avg=15061.68, stdev=3467.57 00:30:24.621 lat (usec): min=7353, max=30141, avg=15095.73, stdev=3468.44 00:30:24.621 clat percentiles (usec): 00:30:24.621 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10814], 20.00th=[11863], 00:30:24.621 | 30.00th=[12780], 40.00th=[13698], 50.00th=[14615], 60.00th=[15664], 00:30:24.621 | 70.00th=[16712], 80.00th=[18220], 90.00th=[19792], 95.00th=[21103], 00:30:24.621 | 99.00th=[23200], 99.50th=[24511], 99.90th=[29492], 99.95th=[29754], 00:30:24.621 | 99.99th=[30016] 00:30:24.621 bw ( KiB/s): min=45120, max=65536, per=90.93%, avg=52776.00, stdev=8891.68, samples=4 00:30:24.621 iops : min= 2820, max= 4096, avg=3298.50, stdev=555.73, samples=4 00:30:24.621 lat (msec) : 4=0.03%, 10=18.41%, 20=78.22%, 50=3.34% 00:30:24.621 cpu : usr=65.44%, sys=28.63%, ctx=38, 
majf=0, minf=53 00:30:24.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:30:24.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:24.621 issued rwts: total=12758,6642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:24.621 00:30:24.621 Run status group 0 (all jobs): 00:30:24.621 READ: bw=99.4MiB/s (104MB/s), 99.4MiB/s-99.4MiB/s (104MB/s-104MB/s), io=199MiB (209MB), run=2005-2005msec 00:30:24.621 WRITE: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=104MiB (109MB), run=1831-1831msec 00:30:24.621 23:00:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.621 23:00:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:24.621 23:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:27.893 Nvme0n1 00:30:27.893 23:00:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=47d5e4df-63a1-4cfc-ac0d-802359d87e83 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 47d5e4df-63a1-4cfc-ac0d-802359d87e83 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=47d5e4df-63a1-4cfc-ac0d-802359d87e83 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:31.176 { 00:30:31.176 "uuid": "47d5e4df-63a1-4cfc-ac0d-802359d87e83", 00:30:31.176 "name": "lvs_0", 00:30:31.176 "base_bdev": "Nvme0n1", 00:30:31.176 "total_data_clusters": 930, 00:30:31.176 "free_clusters": 930, 00:30:31.176 
"block_size": 512, 00:30:31.176 "cluster_size": 1073741824 00:30:31.176 } 00:30:31.176 ]' 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="47d5e4df-63a1-4cfc-ac0d-802359d87e83") .free_clusters' 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="47d5e4df-63a1-4cfc-ac0d-802359d87e83") .cluster_size' 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:31.176 952320 00:30:31.176 23:00:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:31.434 75483dc8-066d-4834-8bc2-2c7e58451f26 00:30:31.434 23:00:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:31.690 23:00:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:31.947 23:00:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:32.208 23:00:24 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:32.209 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.209 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:32.209 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:32.209 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:32.209 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:32.209 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:32.209 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:32.209 23:00:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:32.466 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:32.466 fio-3.35 00:30:32.466 Starting 1 thread 00:30:32.466 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.986 00:30:34.986 test: (groupid=0, jobs=1): err= 0: pid=3653817: Fri Jul 26 23:00:27 2024 00:30:34.986 read: IOPS=6058, BW=23.7MiB/s (24.8MB/s)(47.5MiB/2007msec) 00:30:34.986 slat (nsec): min=1988, max=131047, avg=2572.82, stdev=2079.37 00:30:34.986 clat (usec): min=1002, max=171752, avg=11671.85, stdev=11634.25 00:30:34.986 lat (usec): min=1005, max=171790, avg=11674.42, stdev=11634.48 00:30:34.986 clat percentiles (msec): 00:30:34.986 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:34.986 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:34.986 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:34.986 | 99.00th=[ 14], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:30:34.986 | 99.99th=[ 171] 00:30:34.986 bw ( KiB/s): min=16968, max=26680, per=99.71%, avg=24164.00, stdev=4798.27, samples=4 00:30:34.986 iops : min= 4242, max= 6670, avg=6041.00, stdev=1199.57, samples=4 00:30:34.986 write: IOPS=6037, BW=23.6MiB/s (24.7MB/s)(47.3MiB/2007msec); 0 zone resets 00:30:34.986 slat (nsec): min=2095, max=94013, avg=2660.31, stdev=1448.10 00:30:34.986 clat (usec): min=327, max=169305, avg=9330.00, stdev=10895.05 00:30:34.986 lat (usec): min=329, max=169310, avg=9332.66, stdev=10895.27 00:30:34.986 clat percentiles (msec): 00:30:34.986 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:34.986 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:34.986 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:34.986 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:30:34.986 | 99.99th=[ 169] 00:30:34.986 bw ( KiB/s): min=17960, max=26368, per=99.94%, avg=24138.00, stdev=4123.42, samples=4 00:30:34.986 iops : min= 4490, max= 6592, avg=6034.50, stdev=1030.85, samples=4 00:30:34.986 lat (usec) : 500=0.01%, 750=0.01% 00:30:34.986 lat (msec) : 2=0.03%, 4=0.12%, 10=56.50%, 20=42.80%, 250=0.53% 00:30:34.986 cpu : usr=56.88%, sys=38.09%, ctx=72, majf=0, minf=31 00:30:34.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:34.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:34.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:34.986 issued rwts: total=12160,12118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:34.986 00:30:34.986 Run status group 0 (all jobs): 00:30:34.986 READ: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2007-2007msec 00:30:34.986 WRITE: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=47.3MiB (49.6MB), run=2007-2007msec 00:30:34.986 23:00:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:34.986 23:00:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:35.920 23:00:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=9c7a659d-835a-4269-98b7-933da4cb6f78 00:30:35.920 23:00:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 9c7a659d-835a-4269-98b7-933da4cb6f78 00:30:35.920 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=9c7a659d-835a-4269-98b7-933da4cb6f78 00:30:35.920 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:35.920 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:35.920 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:35.920 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:36.177 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:36.177 { 00:30:36.177 "uuid": "47d5e4df-63a1-4cfc-ac0d-802359d87e83", 00:30:36.177 "name": "lvs_0", 00:30:36.177 "base_bdev": "Nvme0n1", 00:30:36.177 "total_data_clusters": 930, 00:30:36.178 "free_clusters": 0, 00:30:36.178 "block_size": 512, 00:30:36.178 "cluster_size": 1073741824 00:30:36.178 }, 00:30:36.178 { 00:30:36.178 "uuid": "9c7a659d-835a-4269-98b7-933da4cb6f78", 00:30:36.178 "name": "lvs_n_0", 00:30:36.178 "base_bdev": "75483dc8-066d-4834-8bc2-2c7e58451f26", 00:30:36.178 "total_data_clusters": 237847, 00:30:36.178 "free_clusters": 237847, 00:30:36.178 "block_size": 512, 00:30:36.178 "cluster_size": 4194304 00:30:36.178 } 00:30:36.178 ]' 00:30:36.178 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="9c7a659d-835a-4269-98b7-933da4cb6f78") .free_clusters' 00:30:36.178 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:36.178 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9c7a659d-835a-4269-98b7-933da4cb6f78") .cluster_size' 00:30:36.178 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:36.178 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:36.178 23:00:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:36.178 951388 00:30:36.178 23:00:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:37.111 a042c57e-7bdb-4a85-8931-bf27cbde63e8 00:30:37.111 23:00:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:37.111 23:00:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:37.369 23:00:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:37.626 23:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:37.885 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:37.885 fio-3.35 00:30:37.885 Starting 1 thread 00:30:37.885 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.421 00:30:40.421 test: (groupid=0, jobs=1): err= 0: pid=3654544: Fri Jul 26 23:00:32 2024 00:30:40.421 read: IOPS=5833, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2009msec) 00:30:40.421 slat (usec): min=2, max=153, avg= 2.67, stdev= 2.15 00:30:40.421 clat (usec): min=4721, max=19318, avg=12170.09, stdev=992.55 00:30:40.421 lat (usec): min=4726, max=19321, avg=12172.76, stdev=992.43 00:30:40.421 clat percentiles (usec): 00:30:40.421 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[10945], 20.00th=[11338], 00:30:40.421 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:30:40.421 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[13698], 00:30:40.421 | 99.00th=[14353], 99.50th=[14615], 99.90th=[17957], 99.95th=[17957], 00:30:40.421 | 99.99th=[19268] 00:30:40.421 bw ( KiB/s): min=22104, max=23856, per=99.87%, avg=23304.00, stdev=807.51, samples=4 00:30:40.421 iops : min= 5526, max= 5964, avg=5826.00, stdev=201.88, samples=4 00:30:40.421 write: IOPS=5820, BW=22.7MiB/s (23.8MB/s)(45.7MiB/2009msec); 0 zone resets 00:30:40.421 slat (usec): min=2, max=114, avg= 2.76, stdev= 1.71 00:30:40.421 clat (usec): min=2388, max=17704, avg=9662.51, stdev=888.30 00:30:40.421 lat (usec): min=2394, max=17707, avg=9665.28, stdev=888.24 00:30:40.421 clat percentiles (usec): 00:30:40.421 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:30:40.421 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:30:40.421 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:30:40.421 | 99.00th=[11600], 99.50th=[11994], 99.90th=[15270], 99.95th=[16712], 00:30:40.421 | 99.99th=[16712] 00:30:40.421 bw ( KiB/s): min=23128, max=23360, per=99.94%, avg=23270.00, stdev=99.36, samples=4 00:30:40.421 iops : min= 5782, max= 5840, avg=5817.50, stdev=24.84, samples=4 00:30:40.421 lat (msec) : 4=0.05%, 10=34.07%, 20=65.88% 00:30:40.421 cpu : usr=56.72%, sys=38.89%, ctx=115, majf=0, minf=31 00:30:40.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:40.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:40.421 issued rwts: total=11720,11694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:40.421 00:30:40.421 Run status group 0 (all jobs): 00:30:40.421 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2009-2009msec 00:30:40.421 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.7MiB (47.9MB), run=2009-2009msec 00:30:40.421 23:00:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:40.421 23:00:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:40.421 23:00:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:44.609 23:00:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:44.609 
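
For the free-space figures printed earlier, get_lvs_free_mb is just free_clusters × cluster_size expressed in MiB: 930 × 1 GiB = 952320 for lvs_0, and 237847 × 4 MiB = 951388 for the nested lvs_n_0. The teardown now underway mirrors creation in reverse: drop the subsystem that exported the volume, sync, then delete the nested lvol and its store before the outer pair, and finally detach the controller so the PCIe device is released. Compressed from the rpc.py calls traced here, with the path shortened to scripts/rpc.py (the log uses the absolute SPDK path):

    RPC=scripts/rpc.py
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
    sync                                      # flush before removing bdevs
    $RPC bdev_lvol_delete lvs_n_0/lbd_nest_0  # nested lvol first
    $RPC bdev_lvol_delete_lvstore -l lvs_n_0  # then its store
    $RPC bdev_lvol_delete lvs_0/lbd_0         # outer lvol
    $RPC bdev_lvol_delete_lvstore -l lvs_0    # outer store
    $RPC bdev_nvme_detach_controller Nvme0    # release the PCIe NVMe device
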
23:00:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:47.940 23:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:47.940 23:00:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:49.843 rmmod nvme_tcp 00:30:49.843 rmmod nvme_fabrics 00:30:49.843 rmmod nvme_keyring 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3651725 ']' 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3651725 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3651725 ']' 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3651725 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3651725 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3651725' 00:30:49.843 killing process with pid 3651725 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3651725 00:30:49.843 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3651725 00:30:50.102 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:50.102 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:50.102 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:50.102 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:50.102 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:50.102 23:00:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.102 23:00:42 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:50.102 23:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.007 23:00:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:52.007 00:30:52.007 real 0m36.928s 00:30:52.007 user 2m20.873s 00:30:52.007 sys 0m7.218s 00:30:52.007 23:00:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:52.007 23:00:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.007 ************************************ 00:30:52.007 END TEST nvmf_fio_host 00:30:52.007 ************************************ 00:30:52.007 23:00:44 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:52.007 23:00:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:52.007 23:00:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:52.007 23:00:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:52.007 ************************************ 00:30:52.007 START TEST nvmf_failover 00:30:52.007 ************************************ 00:30:52.007 23:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:52.266 * Looking for test storage... 00:30:52.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:52.266 23:00:44 
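The common.sh trace above shows the target command line being assembled piecemeal: build_nvmf_app_args appends the shared-memory id and the 0xFFFF tracepoint mask onto the NVMF_APP array, and the nvmf_tcp_init steps further down prepend an ip netns exec wrapper before anything is launched. A stripped-down sketch of that pattern; the initial array value is an assumption here, while the appended pieces are the ones traced in this run:

  NVMF_APP=(./build/bin/nvmf_tgt)                          # assumed starting value
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # shm id + full tracepoint mask
  NVMF_APP+=("${NO_HUGE[@]}")                              # empty unless hugepages are disabled
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # run inside the target netns
  "${NVMF_APP[@]}" -m 0xE &                                # launched by nvmfappstart; nvmfpid=$!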
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:52.266 23:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.170 23:00:46 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:54.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:54.170 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:54.170 23:00:46 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:54.170 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:54.170 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.170 
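The nvmf_tcp_init steps that start here and finish just below wire the two ice ports into a point-to-point topology: the target port cvl_0_0 moves into a private network namespace, the initiator port cvl_0_1 stays in the root namespace, and a pair of pings proves the path in both directions before any NVMe-oF state is created. Condensed into one sketch, with the names and addresses from this run:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1      # start from clean ports
  ip netns add cvl_0_0_ns_spdk                            # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator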
23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:54.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:30:54.170 00:30:54.170 --- 10.0.0.2 ping statistics --- 00:30:54.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.170 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:54.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:30:54.170 00:30:54.170 --- 10.0.0.1 ping statistics --- 00:30:54.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.170 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:54.170 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3657789 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3657789 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3657789 ']' 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:54.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:54.171 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:54.431 [2024-07-26 23:00:46.692980] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:54.431 [2024-07-26 23:00:46.693080] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.431 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.431 [2024-07-26 23:00:46.762602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:54.431 [2024-07-26 23:00:46.855535] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.431 [2024-07-26 23:00:46.855610] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.431 [2024-07-26 23:00:46.855638] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.431 [2024-07-26 23:00:46.855651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.431 [2024-07-26 23:00:46.855663] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.431 [2024-07-26 23:00:46.855783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.431 [2024-07-26 23:00:46.855881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.431 [2024-07-26 23:00:46.855883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.690 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:54.690 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:54.690 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:54.690 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.690 23:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:54.690 23:00:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.690 23:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:54.947 [2024-07-26 23:00:47.275145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.948 23:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:55.205 Malloc0 00:30:55.205 23:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:55.463 23:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:55.721 23:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.978 [2024-07-26 23:00:48.425773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.978 23:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:56.235 [2024-07-26 23:00:48.694572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:56.235 23:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:56.492 [2024-07-26 23:00:48.939451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3658082 00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3658082 /var/tmp/bdevperf.sock 00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3658082 ']' 00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:56.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
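At this point the target side of the failover test is fully assembled: one malloc-backed namespace under nqn.2016-06.io.spdk:cnode1 reachable through three TCP listeners, and a bdevperf started suspended (-z) that now waits on its own RPC socket. A sketch of that setup, condensed from the failover.sh steps traced above, with the long tree paths shortened to $spdk for readability:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$spdk/scripts/rpc.py

  ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  $rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                          # three paths to fail over across
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # bdevperf idles (-z) until perform_tests arrives on /var/tmp/bdevperf.sock
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &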
00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:56.492 23:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:57.063 23:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:57.063 23:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:57.063 23:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.321 NVMe0n1 00:30:57.321 23:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.580 00:30:57.840 23:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3658214 00:30:57.840 23:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:57.840 23:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:58.776 23:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.034 [2024-07-26 23:00:51.315216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713090 is same with the state(5) to be set 00:30:59.034 [2024-07-26 23:00:51.315321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713090 is same with the state(5) to be set 00:30:59.034 [2024-07-26 23:00:51.315343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713090 is same with the state(5) to be set 00:30:59.034 [2024-07-26 23:00:51.315371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713090 is same with the state(5) to be set 00:30:59.034 [2024-07-26 23:00:51.315383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713090 is same with the state(5) to be set 00:30:59.034 [2024-07-26 23:00:51.315394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713090 is same with the state(5) to be set 00:30:59.034 [2024-07-26 23:00:51.315406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713090 is same with the state(5) to be set 00:30:59.034 [2024-07-26 23:00:51.315417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713090 is same with the state(5) to be set 00:30:59.034 [2024-07-26 23:00:51.315429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713090 is same with the state(5) to be set 00:30:59.034 23:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:02.322 23:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.322 00:31:02.322 23:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:02.579 [2024-07-26 23:00:54.902542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714610 is same with the state(5) to be set 00:31:02.579 (message repeated roughly 22 more times for tqpair=0x1714610 as the dropped 4421 connection's qpairs were torn down) 00:31:02.580 23:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:05.870 23:00:57 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.870 [2024-07-26 23:00:58.197915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.870 23:00:58 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:06.801 23:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:07.058 [2024-07-26 23:00:59.449594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the state(5) to be set 00:31:07.058 (message repeated roughly 35 more times for tqpair=0x1714980 after the 4422 listener was removed) [2024-07-26 23:00:59.450157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the 
state(5) to be set 00:31:07.059 [2024-07-26 23:00:59.450169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the state(5) to be set 00:31:07.059 [2024-07-26 23:00:59.450181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the state(5) to be set 00:31:07.059 [2024-07-26 23:00:59.450208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the state(5) to be set 00:31:07.059 [2024-07-26 23:00:59.450221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the state(5) to be set 00:31:07.059 [2024-07-26 23:00:59.450232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the state(5) to be set 00:31:07.059 [2024-07-26 23:00:59.450244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the state(5) to be set 00:31:07.059 [2024-07-26 23:00:59.450256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the state(5) to be set 00:31:07.059 [2024-07-26 23:00:59.450268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714980 is same with the state(5) to be set 00:31:07.059 23:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3658214 00:31:13.628 0 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3658082 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3658082 ']' 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3658082 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3658082 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3658082' 00:31:13.628 killing process with pid 3658082 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3658082 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3658082 00:31:13.628 23:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:13.628 [2024-07-26 23:00:49.003124] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:13.628 [2024-07-26 23:00:49.003214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658082 ] 00:31:13.628 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.628 [2024-07-26 23:00:49.064786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.628 [2024-07-26 23:00:49.152691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.628 Running I/O for 15 seconds... 
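Everything from here to the end of try.txt is the failover exercise itself: bdevperf gets two paths to the same subsystem under one controller name, the 15 second verify run starts, and the test then repeatedly removes the active listener while always leaving a survivor. Condensed, with the same ports and ordering as this run ($spdk and $rpc as in the sketch above):

  brpc="$rpc -s /var/tmp/bdevperf.sock"                   # bdevperf's private RPC socket
  nqn=nqn.2016-06.io.spdk:cnode1

  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!

  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # drop the active path
  sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # fail over again
  sleep 3
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # final failover
  wait $run_test_pid

Each listener removal is what produced the bursts of "recv state of tqpair ... is same with the state(5)" messages above (the target tearing down the dropped connection's qpairs), and the ABORTED - SQ DELETION storm that follows in try.txt is the initiator-side view of the same event. When skimming a long try.txt, a throwaway one-liner (hypothetical, not part of the suite) can confirm the aborts cluster inside the failover windows:

  awk '/ABORTED - SQ DELETION/ { split($2, t, "."); print t[1] }' try.txt | sort | uniq -c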
00:31:13.628 [2024-07-26 23:00:51.317069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.628 [2024-07-26 23:00:51.317116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.628 [2024-07-26 23:00:51.317146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 [2024-07-26 23:00:51.317183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 [2024-07-26 23:00:51.317214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 [2024-07-26 23:00:51.317247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 [2024-07-26 23:00:51.317278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 [2024-07-26 23:00:51.317308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 [2024-07-26 23:00:51.317338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 [2024-07-26 23:00:51.317386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 [2024-07-26 23:00:51.317416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 [2024-07-26 23:00:51.317460] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.629 [2024-07-26 23:00:51.317474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.629 (the command/completion pair repeats in this pattern for every outstanding READ and WRITE from lba 75104 through lba 75488, each completing with ABORTED - SQ DELETION (00/08) as the deleted submission queue drains) [2024-07-26 23:00:51.318987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.630 [2024-07-26 23:00:51.319001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.630 [2024-07-26 23:00:51.319016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.630 [2024-07-26 23:00:51.319034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 
23:00:51.319646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.631 [2024-07-26 23:00:51.319890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.631 [2024-07-26 23:00:51.319905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.646 [2024-07-26 23:00:51.319919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.646 [2024-07-26 23:00:51.319934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.646 [2024-07-26 23:00:51.319947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.646 [2024-07-26 23:00:51.319985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.646 [2024-07-26 23:00:51.320003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75752 len:8 PRP1 0x0 PRP2 0x0 00:31:13.646 [2024-07-26 23:00:51.320016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.646 [2024-07-26 23:00:51.320034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.646 [2024-07-26 23:00:51.320046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.646 [2024-07-26 23:00:51.320057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75760 len:8 PRP1 0x0 PRP2 0x0 00:31:13.646 [2024-07-26 23:00:51.320094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.646 [2024-07-26 23:00:51.320109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.646 [2024-07-26 23:00:51.320121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.646 [2024-07-26 23:00:51.320132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75768 len:8 PRP1 0x0 PRP2 0x0 00:31:13.646 [2024-07-26 23:00:51.320145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.646 [2024-07-26 23:00:51.320158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.646 [2024-07-26 23:00:51.320170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.646 [2024-07-26 23:00:51.320181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75776 len:8 PRP1 0x0 PRP2 0x0 00:31:13.646 [2024-07-26 23:00:51.320194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75784 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75792 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320300] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75800 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75808 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75816 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75824 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75832 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75840 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75848 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75856 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75864 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75872 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75880 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75888 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75896 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.320964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.320975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.320986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75904 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.320999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.321015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.321026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.647 [2024-07-26 23:00:51.321038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75912 len:8 PRP1 0x0 PRP2 0x0 00:31:13.647 [2024-07-26 23:00:51.321050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.647 [2024-07-26 23:00:51.321088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.647 [2024-07-26 23:00:51.321101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75920 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75928 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75936 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 
23:00:51.321237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75944 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75952 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75960 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75968 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75976 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75984 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321554] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75992 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76000 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76008 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76016 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76024 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.648 [2024-07-26 23:00:51.321804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.648 [2024-07-26 23:00:51.321819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76032 len:8 PRP1 0x0 PRP2 0x0 00:31:13.648 [2024-07-26 23:00:51.321832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.648 [2024-07-26 23:00:51.321891] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
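
[editor's note: spdk_nvme_print_completion renders the NVMe status as an (SCT/SC) pair, so the "(00/08)" throughout these records is status code type 0x0 (Generic Command Status) with status code 0x08, Command Aborted due to SQ Deletion, per the NVMe base specification: the submission queue was deleted out from under the in-flight I/O, which is exactly what a failover-driven qpair teardown looks like from the initiator side. A minimal Python decoder for the pairs that appear in this excerpt follows; it is not part of the test run, and the tables cover only the codes seen here:

    # decode_status.py - toy decoder for SPDK's "(SCT/SC)" completion notation
    STATUS_CODE_TYPES = {
        0x0: "generic command status",
        0x1: "command specific status",
        0x2: "media and data integrity errors",
        0x7: "vendor specific",
    }
    GENERIC_STATUS = {
        0x00: "successful completion",
        0x07: "command abort requested",
        0x08: "command aborted due to SQ deletion",
    }

    def decode_status(pair: str) -> str:
        """Decode an 'SCT/SC' pair such as '00/08' into readable text."""
        sct_s, sc_s = pair.strip("()").split("/")
        sct, sc = int(sct_s, 16), int(sc_s, 16)
        sct_name = STATUS_CODE_TYPES.get(sct, f"unknown SCT 0x{sct:x}")
        sc_name = (GENERIC_STATUS.get(sc, f"unknown SC 0x{sc:02x}")
                   if sct == 0x0 else f"SC 0x{sc:02x}")
        return f"{sct_name}: {sc_name}"

    print(decode_status("00/08"))
    # -> generic command status: command aborted due to SQ deletion
]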
00:31:13.648 [2024-07-26 23:00:51.321909] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[condensed: four outstanding admin ASYNC EVENT REQUEST (0c) commands, qid:0 cid:0 through cid:3, nsid:0 cdw10:00000000 cdw11:00000000, are each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; [2024-07-26 23:00:51.321943] through [2024-07-26 23:00:51.322071]]
00:31:13.648 [2024-07-26 23:00:51.322086] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.648 [2024-07-26 23:00:51.325333] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.649 [2024-07-26 23:00:51.325371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d740 (9): Bad file descriptor
00:31:13.649 [2024-07-26 23:00:51.489653] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
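
[editor's note: the bursts above and below are long runs of near-identical record pairs, and when triaging one of these logs it helps to collapse them per opcode. A minimal sketch, assuming only the record layout visible in this excerpt; the "build.log" file name is hypothetical and this is not an SPDK tool:

    # summarize_aborts.py - collapse an SPDK qpair abort burst per opcode
    import re

    # Matches the command-print records emitted by nvme_io_qpair_print_command,
    # e.g. "... *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75176 len:8 ..."
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def summarize(log_text: str) -> None:
        lbas_by_op: dict[str, list[int]] = {}
        for m in CMD_RE.finditer(log_text):
            op, lba = m.group(1), int(m.group(5))
            lbas_by_op.setdefault(op, []).append(lba)
        for op, lbas in sorted(lbas_by_op.items()):
            print(f"{op}: {len(lbas)} commands, lba {min(lbas)}-{max(lbas)}")

    with open("build.log") as f:  # hypothetical path to the raw console log
        summarize(f.read())

Applied to just the first teardown's records shown above (72 in-flight plus 36 queued WRITEs), it would report "WRITE: 108 commands, lba 75176-76032".]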
00:31:13.649 [2024-07-26 23:00:54.903617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.649 [2024-07-26 23:00:54.903660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed: roughly 3.4 s after the reset completed, a second SQ-deletion abort burst hits the qpair. Seven further in-flight READ commands (sqid:1, nsid:1, lba:109440 through lba:109488 in steps of 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and 63 in-flight WRITE commands (sqid:1, nsid:1, lba:109816 through lba:110312 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), cid varying per command, are printed by nvme_qpair.c: 243:nvme_io_qpair_print_command and each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; timestamps 00:31:13.649-00:31:13.651, [2024-07-26 23:00:54.903686] through [2024-07-26 23:00:54.905852]; the completion record for the last WRITE continues below] 00:31:13.651 [2024-07-26 23:00:54.905866] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.905881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.905894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.905909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.905923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.905938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.905951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.905966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.905980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.905995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.906023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.906056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.906110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.906140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.906169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.906202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.906232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.906261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.651 [2024-07-26 23:00:54.906291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.651 [2024-07-26 23:00:54.906305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.652 [2024-07-26 23:00:54.906334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.652 [2024-07-26 23:00:54.906368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.652 [2024-07-26 23:00:54.906413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:13.652 [2024-07-26 23:00:54.906804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.906978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.906992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.907008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.907022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.907037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.907067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.907101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.907117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 
23:00:54.907132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.907147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.907162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.907176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.907192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.907206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.907221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.907235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.907251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.652 [2024-07-26 23:00:54.907265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.652 [2024-07-26 23:00:54.907280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.653 [2024-07-26 23:00:54.907612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.653 [2024-07-26 23:00:54.907660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.653 [2024-07-26 23:00:54.907672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109808 len:8 PRP1 0x0 PRP2 0x0 00:31:13.653 [2024-07-26 23:00:54.907685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.653 [2024-07-26 23:00:54.907746] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x76ef50 was disconnected and freed. reset controller. 
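Every completion printed in the run above carries status (00/08). In spdk_nvme_print_completion output that pair is (status code type / status code): type 0x0 is the NVMe Generic Command Status set, and code 0x08 in that set is Command Aborted due to SQ Deletion, the expected status for I/O still outstanding on a submission queue the host deletes during a reset or failover. A minimal decoding sketch in Python; the mapping covers only the generic codes visible in this log, and the helper name is illustrative, not an SPDK API:

# Decode the "(SCT/SC)" pair that spdk_nvme_print_completion appends to
# each status string, e.g. "ABORTED - SQ DELETION (00/08)". Illustrative
# helper only; the table covers just the generic codes seen in this log.
GENERIC_STATUS = {  # NVMe Generic Command Status values (SCT 0x0)
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(pair: str) -> str:
    sct, sc = (int(field, 16) for field in pair.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status("00/08"))  # -> ABORTED - SQ DELETION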
00:31:13.653 [2024-07-26 23:00:54.907764] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:13.653 [2024-07-26 23:00:54.907796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:13.653 [2024-07-26 23:00:54.907829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:13.653 [2024-07-26 23:00:54.907845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:13.653 [2024-07-26 23:00:54.907859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:13.653 [2024-07-26 23:00:54.907877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:13.653 [2024-07-26 23:00:54.907891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:13.653 [2024-07-26 23:00:54.907905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:13.653 [2024-07-26 23:00:54.907919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:13.653 [2024-07-26 23:00:54.907932] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:13.653 [2024-07-26 23:00:54.907985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d740 (9): Bad file descriptor
00:31:13.653 [2024-07-26 23:00:54.911271] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:13.653 [2024-07-26 23:00:54.943617] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
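The same storm-then-reset cycle repeats at 23:00:59 below. When triaging logs like this, the per-command NOTICE pairs are bookkeeping noise; the state machine lives in a handful of event lines. A small Python sketch along those lines; the regex simply matches the nvme_io_qpair_print_command format shown above, and the helper is an ad-hoc triage aid, not an SPDK tool:

import re
from collections import Counter

# Summarize an SPDK qpair-teardown storm from raw console text: count
# aborted commands per opcode, record the overall LBA span, and keep
# only the state-transition lines.
CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                 r"sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")
EVENTS = ("aborting queued i/o", "was disconnected and freed",
          "Start failover", "in failed state", "resetting controller",
          "Resetting controller successful")

def summarize(log_text: str) -> None:
    counts, lbas = Counter(), []
    for opcode, lba in CMD.findall(log_text):
        counts[opcode] += 1
        lbas.append(int(lba))
    if lbas:
        print(f"aborted I/O: {dict(counts)}, lba {min(lbas)}..{max(lbas)}")
    for line in log_text.splitlines():
        if any(marker in line for marker in EVENTS):
            print("event:", line.strip())

Fed this section's text, summarize() reduces the thousands of characters of NOTICE pairs to a one-line count plus the few event lines that actually describe the failover.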
00:31:13.653 [... 23:00:59.450315 through 23:00:59.450471: the same teardown repeats on the failover path: four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0..cid:3, nsid:0 cdw10:00000000 cdw11:00000000), each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:31:13.653 [2024-07-26 23:00:59.450484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74d740 is same with the state(5) to be set
00:31:13.654 [... 23:00:59.450543 through 23:00:59.453510: repeated NOTICE pairs from nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion, one per queued I/O: READ sqid:1 nsid:1 lba:60728..61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:61176..61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:31:13.657 [2024-07-26 23:00:59.453524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.453973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.453987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454142] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.657 [2024-07-26 23:00:59.454421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.657 [2024-07-26 23:00:59.454436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.658 [2024-07-26 23:00:59.454449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.658 [2024-07-26 23:00:59.454483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:13.658 [2024-07-26 23:00:59.454498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.658 [2024-07-26 23:00:59.454510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61168 len:8 PRP1 0x0 PRP2 0x0 00:31:13.658 [2024-07-26 23:00:59.454523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.658 [2024-07-26 23:00:59.454580] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x770da0 was disconnected and freed. reset controller. 00:31:13.658 [2024-07-26 23:00:59.454598] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:13.658 [2024-07-26 23:00:59.454613] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.658 [2024-07-26 23:00:59.457868] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.658 [2024-07-26 23:00:59.457908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d740 (9): Bad file descriptor 00:31:13.658 [2024-07-26 23:00:59.532509] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:13.658 00:31:13.658 Latency(us) 00:31:13.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.658 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:13.658 Verification LBA range: start 0x0 length 0x4000 00:31:13.658 NVMe0n1 : 15.00 8857.61 34.60 700.40 0.00 13364.66 788.86 18544.26 00:31:13.658 =================================================================================================================== 00:31:13.658 Total : 8857.61 34.60 700.40 0.00 13364.66 788.86 18544.26 00:31:13.658 Received shutdown signal, test time was about 15.000000 seconds 00:31:13.658 00:31:13.658 Latency(us) 00:31:13.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.658 =================================================================================================================== 00:31:13.658 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3660048 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3660048 /var/tmp/bdevperf.sock 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3660048 ']' 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:31:13.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:13.658 23:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:13.658 [2024-07-26 23:01:06.018634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:13.658 23:01:06 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:13.916 [2024-07-26 23:01:06.267289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:13.916 23:01:06 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:14.174 NVMe0n1 00:31:14.174 23:01:06 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:14.740 00:31:14.740 23:01:06 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:14.999 00:31:14.999 23:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:14.999 23:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:15.256 23:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:15.516 23:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:18.801 23:01:10 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:18.801 23:01:10 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:18.801 23:01:11 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3660717 00:31:18.801 23:01:11 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:18.801 23:01:11 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3660717 00:31:20.178 0 00:31:20.178 23:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:20.178 [2024-07-26 23:01:05.534680] Starting SPDK v24.05.1-pre git sha1 
241d0f3c9 / DPDK 22.11.4 initialization... 00:31:20.178 [2024-07-26 23:01:05.534776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660048 ] 00:31:20.178 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.178 [2024-07-26 23:01:05.594832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.178 [2024-07-26 23:01:05.677980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.178 [2024-07-26 23:01:07.880455] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:20.178 [2024-07-26 23:01:07.880550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.178 [2024-07-26 23:01:07.880573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.178 [2024-07-26 23:01:07.880590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.178 [2024-07-26 23:01:07.880620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.178 [2024-07-26 23:01:07.880635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.178 [2024-07-26 23:01:07.880650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.178 [2024-07-26 23:01:07.880664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.178 [2024-07-26 23:01:07.880678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.178 [2024-07-26 23:01:07.880693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:20.178 [2024-07-26 23:01:07.880738] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:20.178 [2024-07-26 23:01:07.880770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e0740 (9): Bad file descriptor 00:31:20.178 [2024-07-26 23:01:07.932845] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:20.178 Running I/O for 1 seconds... 
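The bdevperf pattern exercised above is reusable outside this harness: start the app idle with -z, configure it over its RPC socket, then trigger the run. A minimal sketch, assuming an SPDK checkout at $SPDK (a placeholder for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path used here) and a target already listening on 10.0.0.2:4420; all flags are taken from the invocations visible in this log:

  # start bdevperf with no bdevs (-z) and wait for configuration on the RPC socket
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # attach the NVMe-oF namespace the verify workload will target
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the configured jobs and block until they finish
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests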
00:31:20.178 00:31:20.178 Latency(us) 00:31:20.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:20.178 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:20.178 Verification LBA range: start 0x0 length 0x4000 00:31:20.178 NVMe0n1 : 1.01 8918.06 34.84 0.00 0.00 14292.18 2196.67 16408.27 00:31:20.178 =================================================================================================================== 00:31:20.178 Total : 8918.06 34.84 0.00 0.00 14292.18 2196.67 16408.27 00:31:20.178 23:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:20.178 23:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:20.178 23:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:20.436 23:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:20.436 23:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:20.693 23:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:20.950 23:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3660048 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3660048 ']' 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3660048 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3660048 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3660048' 00:31:24.236 killing process with pid 3660048 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3660048 00:31:24.236 23:01:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3660048 00:31:24.494 23:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:24.494 23:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:24.753 
23:01:17 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:24.753 rmmod nvme_tcp 00:31:24.753 rmmod nvme_fabrics 00:31:24.753 rmmod nvme_keyring 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3657789 ']' 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3657789 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3657789 ']' 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3657789 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3657789 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3657789' 00:31:24.753 killing process with pid 3657789 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3657789 00:31:24.753 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3657789 00:31:25.011 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:25.011 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:25.011 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:25.011 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:25.011 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:25.011 23:01:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.011 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.011 23:01:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.548 23:01:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:27.548 00:31:27.548 real 0m35.012s 00:31:27.548 user 2m3.615s 00:31:27.548 sys 0m5.871s 00:31:27.548 23:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:27.548 23:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
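Condensed, the failover exercise that just finished has this shape (a sketch assuming the same rpc.py paths, portal addresses, and subsystem name as in the log; $SPDK again stands in for the full jenkins workspace path):

  # expose two additional portals so the initiator has alternate paths
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # register all three portals under one controller name in bdevperf
  for port in 4420 4421 4422; do
      $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # drop the active path; bdev_nvme fails over to the next portal and logs the reset
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # the final assertion counts one 'Resetting controller successful' per forced failover
  grep -c 'Resetting controller successful' $SPDK/test/nvmf/host/try.txt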
00:31:27.548 ************************************ 00:31:27.548 END TEST nvmf_failover 00:31:27.548 ************************************ 00:31:27.548 23:01:19 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:27.548 23:01:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:27.548 23:01:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:27.548 23:01:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.548 ************************************ 00:31:27.548 START TEST nvmf_host_discovery 00:31:27.548 ************************************ 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:27.548 * Looking for test storage... 00:31:27.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.548 23:01:19 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go triple repeated several more times, trimmed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same rotated toolchain prefix, trimmed]:/var/lib/snapd/snap/bin 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same rotated toolchain prefix, trimmed]:/var/lib/snapd/snap/bin 00:31:27.548 23:01:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 23:01:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo [the exported PATH, trimmed] 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:27.549 23:01:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:29.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:29.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.456 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:29.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:29.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:29.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:29.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:31:29.457 00:31:29.457 --- 10.0.0.2 ping statistics --- 00:31:29.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.457 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:29.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:29.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:31:29.457 00:31:29.457 --- 10.0.0.1 ping statistics --- 00:31:29.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.457 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3663310 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3663310 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3663310 ']' 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.457 [2024-07-26 23:01:21.657231] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:29.457 [2024-07-26 23:01:21.657306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.457 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.457 [2024-07-26 23:01:21.724873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.457 [2024-07-26 23:01:21.816332] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.457 [2024-07-26 23:01:21.816403] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.457 [2024-07-26 23:01:21.816420] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.457 [2024-07-26 23:01:21.816434] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.457 [2024-07-26 23:01:21.816445] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
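The namespace plumbing nvmftestinit performed above reduces to a handful of ip/iptables commands; a standalone sketch using this run's interface names (cvl_0_0/cvl_0_1 are specific to the E810 ports on this host, and every command below appears in the trace above):

  # move one NIC port into a private namespace to play the target side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps cvl_0_1 as 10.0.0.1; the namespaced target side becomes 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow the NVMe/TCP port through on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions before starting the target app
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1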
00:31:29.457 [2024-07-26 23:01:21.816486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.457 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.457 [2024-07-26 23:01:21.956045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.716 [2024-07-26 23:01:21.964246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.716 null0 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.716 null1 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3663337 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3663337 /tmp/host.sock 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3663337 ']' 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:29.716 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:29.716 23:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.716 [2024-07-26 23:01:22.035663] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:29.716 [2024-07-26 23:01:22.035729] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663337 ] 00:31:29.716 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.716 [2024-07-26 23:01:22.095031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.716 [2024-07-26 23:01:22.185266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.975 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.976 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:29.976 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:29.976 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.976 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.976 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.976 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.976 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.976 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.976 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:30.234 23:01:22 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.234 [2024-07-26 23:01:22.589924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:30.234 23:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:30.491 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.491 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:30.491 23:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:31.088 [2024-07-26 23:01:23.324141] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:31.088 [2024-07-26 23:01:23.324184] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:31.088 [2024-07-26 23:01:23.324209] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:31.088 [2024-07-26 23:01:23.410497] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:31.368 [2024-07-26 23:01:23.597404] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:31.368 [2024-07-26 23:01:23.597449] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:31.368 23:01:23 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.368 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:31.627 23:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:31.627 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:31.627 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:31.627 23:01:24 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:31:31.627 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:31.627 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:31.627 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:31.627 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:31.627 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.628 [2024-07-26 23:01:24.050152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:31.628 [2024-07-26 23:01:24.050770] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:31.628 [2024-07-26 23:01:24.050809] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:31.628 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.888 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:31.888 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:31.888 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:31.888 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:31.888 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:31.888 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.889 [2024-07-26 23:01:24.179235] 
bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:31.889 23:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:31.889 [2024-07-26 23:01:24.236762] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:31.889 [2024-07-26 23:01:24.236790] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:31.889 [2024-07-26 23:01:24.236801] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.828 [2024-07-26 23:01:25.278335] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:32.828 [2024-07-26 23:01:25.278373] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:32.828 [2024-07-26 23:01:25.278853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.828 [2024-07-26 23:01:25.278886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.828 [2024-07-26 23:01:25.278918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.828 [2024-07-26 23:01:25.278932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.828 [2024-07-26 23:01:25.278946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.828 [2024-07-26 23:01:25.278960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.828 [2024-07-26 23:01:25.278974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.828 [2024-07-26 23:01:25.278988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.828 [2024-07-26 23:01:25.279002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1cda0 is same with the state(5) to be set 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:32.828 23:01:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:32.828 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:32.829 [2024-07-26 23:01:25.288840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1cda0 (9): Bad file descriptor 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.829 [2024-07-26 23:01:25.298883] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.829 [2024-07-26 23:01:25.299242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.829 [2024-07-26 23:01:25.299273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1cda0 with addr=10.0.0.2, port=4420 00:31:32.829 [2024-07-26 23:01:25.299290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1cda0 is same with the state(5) to be set 00:31:32.829 [2024-07-26 23:01:25.299314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1cda0 (9): Bad file descriptor 00:31:32.829 [2024-07-26 23:01:25.299337] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.829 [2024-07-26 23:01:25.299363] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.829 [2024-07-26 23:01:25.299398] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.829 [2024-07-26 23:01:25.299418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
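errno 111 here is ECONNREFUSED: the 4420 listener was removed by nvmf_subsystem_remove_listener just above, so each reconnect attempt from the host's bdev_nvme layer is refused until the discovery service prunes the stale path. Assuming the same addresses as the trace, the refusal can be seen with a plain TCP probe from the initiator side (bash's /dev/tcp; a diagnostic sketch, not part of the test itself):

    # 4420 was just removed and should refuse; 4421 should still accept.
    (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null || echo "4420 refused"
    (exec 3<>/dev/tcp/10.0.0.2/4421) 2>/dev/null && echo "4421 accepts"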
00:31:32.829 [2024-07-26 23:01:25.308958] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.829 [2024-07-26 23:01:25.309218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.829 [2024-07-26 23:01:25.309247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1cda0 with addr=10.0.0.2, port=4420 00:31:32.829 [2024-07-26 23:01:25.309264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1cda0 is same with the state(5) to be set 00:31:32.829 [2024-07-26 23:01:25.309286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1cda0 (9): Bad file descriptor 00:31:32.829 [2024-07-26 23:01:25.309308] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.829 [2024-07-26 23:01:25.309322] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.829 [2024-07-26 23:01:25.309336] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.829 [2024-07-26 23:01:25.309354] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.829 [2024-07-26 23:01:25.319027] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.829 [2024-07-26 23:01:25.319264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.829 [2024-07-26 23:01:25.319292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1cda0 with addr=10.0.0.2, port=4420 00:31:32.829 [2024-07-26 23:01:25.319309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1cda0 is same with the state(5) to be set 00:31:32.829 [2024-07-26 23:01:25.319330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1cda0 (9): Bad file descriptor 00:31:32.829 [2024-07-26 23:01:25.319377] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.829 [2024-07-26 23:01:25.319391] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.829 [2024-07-26 23:01:25.319405] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.829 [2024-07-26 23:01:25.319423] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
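The "(( max-- ))" / eval pairs repeated throughout these traces come from the harness's waitforcondition helper, which re-evaluates a shell condition until it holds or ten attempts elapse. A sketch of that pattern as it shows up in the log (names mirror the traces; an illustration, not the verbatim autotest source):

    # Re-evaluate a condition up to 10 times, one second apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
    # e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'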
00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.829 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.829 [2024-07-26 23:01:25.329124] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.829 [2024-07-26 23:01:25.329370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.829 [2024-07-26 23:01:25.329399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1cda0 with addr=10.0.0.2, port=4420 00:31:32.829 [2024-07-26 23:01:25.329416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1cda0 is same with the state(5) to be set 00:31:32.829 [2024-07-26 23:01:25.329439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1cda0 (9): Bad file descriptor 00:31:32.829 [2024-07-26 23:01:25.329460] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.829 [2024-07-26 23:01:25.329475] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.829 [2024-07-26 23:01:25.329489] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.829 [2024-07-26 23:01:25.329508] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
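The notification checks pass "-i <notify_id>" so that each notify_get_notifications call returns only events newer than the last id the test recorded (2 at this point); piping through jq's length yields the count that is_notification_count_eq compares against expected_count. The same check run by hand, with the host socket path from the trace:

    # Count bdev notifications newer than the last seen id (2 here).
    ./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 2 \
        | jq '. | length'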
00:31:33.088 [2024-07-26 23:01:25.339202] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:33.088 [2024-07-26 23:01:25.339435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.088 [2024-07-26 23:01:25.339464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1cda0 with addr=10.0.0.2, port=4420 00:31:33.088 [2024-07-26 23:01:25.339486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1cda0 is same with the state(5) to be set 00:31:33.088 [2024-07-26 23:01:25.339510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1cda0 (9): Bad file descriptor 00:31:33.088 [2024-07-26 23:01:25.339531] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:33.088 [2024-07-26 23:01:25.339546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:33.088 [2024-07-26 23:01:25.339560] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:33.088 [2024-07-26 23:01:25.339579] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:33.088 [2024-07-26 23:01:25.349292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:33.088 [2024-07-26 23:01:25.349497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.088 [2024-07-26 23:01:25.349524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1cda0 with addr=10.0.0.2, port=4420 00:31:33.088 [2024-07-26 23:01:25.349540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1cda0 is same with the state(5) to be set 00:31:33.088 [2024-07-26 23:01:25.349562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1cda0 (9): Bad file descriptor 00:31:33.088 [2024-07-26 23:01:25.349583] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:33.089 [2024-07-26 23:01:25.349597] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:33.089 [2024-07-26 23:01:25.349611] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:33.089 [2024-07-26 23:01:25.349630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
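With 4420 gone, the @131 check that follows polls get_subsystem_paths until the host reports 4421 as the only remaining trsvcid. A standalone sketch using the same RPC and jq filter as the traces:

    # Poll until 4421 is the sole remaining path for nvme0.
    for ((i = 0; i < 10; i++)); do
        paths=$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 |
                jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [[ $paths == 4421 ]] && break
        sleep 1
    done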
00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.089 [2024-07-26 23:01:25.359363] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:33.089 [2024-07-26 23:01:25.359643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.089 [2024-07-26 23:01:25.359670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1cda0 with addr=10.0.0.2, port=4420 00:31:33.089 [2024-07-26 23:01:25.359686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1cda0 is same with the state(5) to be set 00:31:33.089 [2024-07-26 23:01:25.359708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1cda0 (9): Bad file descriptor 00:31:33.089 [2024-07-26 23:01:25.359748] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:33.089 [2024-07-26 23:01:25.359766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:33.089 [2024-07-26 23:01:25.359779] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:33.089 [2024-07-26 23:01:25.359798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:33.089 [2024-07-26 23:01:25.364752] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:33.089 [2024-07-26 23:01:25.364781] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery 
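
get_notification_count (host/discovery.sh@74-75 above) asks the target for all bdev notifications newer than the last seen notify ID and counts them with jq. A hedged sketch of the pattern; the cursor update matches the notify_id=2 -> notify_id=4 progression in this log but is still an assumption about the helper's exact body:

  get_notification_count() {
      # Count notifications issued since $notify_id
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
      # Advance the cursor so the next call only counts new events
      notify_id=$((notify_id + notification_count))
  }
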
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.089 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.347 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:33.347 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:33.347 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:33.347 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:33.347 23:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:33.347 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.347 23:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.278 [2024-07-26 23:01:26.653126] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:34.278 [2024-07-26 23:01:26.653163] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:34.278 [2024-07-26 23:01:26.653186] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:34.278 [2024-07-26 23:01:26.740496] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:34.845 [2024-07-26 23:01:27.051608] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:34.845 [2024-07-26 23:01:27.051664] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.845 23:01:27 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:34.845 request: 00:31:34.845 { 00:31:34.845 "name": "nvme", 00:31:34.845 "trtype": "tcp", 00:31:34.845 "traddr": "10.0.0.2", 00:31:34.845 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:34.845 "adrfam": "ipv4", 00:31:34.845 "trsvcid": "8009", 00:31:34.845 "wait_for_attach": true, 00:31:34.845 "method": "bdev_nvme_start_discovery", 00:31:34.845 "req_id": 1 00:31:34.845 } 00:31:34.845 Got JSON-RPC error response 00:31:34.845 response: 00:31:34.845 { 00:31:34.845 "code": -17, 00:31:34.845 "message": "File exists" 00:31:34.845 } 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.846 request: 00:31:34.846 { 00:31:34.846 "name": "nvme_second", 00:31:34.846 "trtype": "tcp", 00:31:34.846 "traddr": "10.0.0.2", 00:31:34.846 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:34.846 "adrfam": "ipv4", 00:31:34.846 "trsvcid": "8009", 00:31:34.846 "wait_for_attach": true, 00:31:34.846 "method": "bdev_nvme_start_discovery", 00:31:34.846 "req_id": 1 00:31:34.846 } 00:31:34.846 Got JSON-RPC error response 00:31:34.846 response: 00:31:34.846 { 00:31:34.846 "code": -17, 00:31:34.846 "message": "File exists" 00:31:34.846 } 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.846 23:01:27 
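
Both duplicate bdev_nvme_start_discovery attempts above fail by design: a discovery service for that endpoint already exists, so the target answers with JSON-RPC error -17 ("File exists"), and the NOT wrapper turns the expected failure into a test pass. A simplified sketch of the idea (the real valid_exec_arg/es handling in autotest_common.sh also screens for crashes, i.e. exit codes above 128):

  NOT() {
      # Succeed only if the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  # Usage, as in the trace:
  # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
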
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.846 23:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.780 [2024-07-26 23:01:28.242965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.780 [2024-07-26 23:01:28.243012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4e5f0 with addr=10.0.0.2, port=8010 00:31:35.780 [2024-07-26 23:01:28.243038] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:35.780 [2024-07-26 23:01:28.243078] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:35.780 [2024-07-26 23:01:28.243091] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:37.157 [2024-07-26 23:01:29.245449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.157 [2024-07-26 23:01:29.245497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4e980 with addr=10.0.0.2, port=8010 00:31:37.157 [2024-07-26 23:01:29.245524] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:37.157 [2024-07-26 23:01:29.245538] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:37.157 [2024-07-26 23:01:29.245552] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:38.093 [2024-07-26 23:01:30.247647] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:38.093 request: 00:31:38.093 { 00:31:38.093 "name": "nvme_second", 00:31:38.093 "trtype": "tcp", 00:31:38.093 "traddr": "10.0.0.2", 00:31:38.093 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:38.093 "adrfam": "ipv4", 00:31:38.093 "trsvcid": "8010", 00:31:38.093 "attach_timeout_ms": 3000, 00:31:38.093 "method": "bdev_nvme_start_discovery", 00:31:38.093 "req_id": 1 00:31:38.093 } 00:31:38.093 Got JSON-RPC error response 00:31:38.093 response: 00:31:38.093 { 00:31:38.093 "code": -110, 00:31:38.093 "message": "Connection timed out" 
00:31:38.093 } 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3663337 00:31:38.093 23:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:38.094 rmmod nvme_tcp 00:31:38.094 rmmod nvme_fabrics 00:31:38.094 rmmod nvme_keyring 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3663310 ']' 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3663310 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3663310 ']' 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3663310 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3663310 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- 
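
The port 8010 case above exercised the attach timeout instead: nothing listens on 8010, so every connect() fails with errno 111 (ECONNREFUSED) until the 3000 ms budget passed via -T (attach_timeout_ms in the request) expires and the RPC returns -110 ("Connection timed out"). Reproducing it by hand looks roughly like this, with paths and NQN per the trace:

  # Expected to fail after ~3 s; no listener exists on port 8010
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
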
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3663310' 00:31:38.094 killing process with pid 3663310 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3663310 00:31:38.094 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3663310 00:31:38.354 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:38.354 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:38.354 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:38.354 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:38.354 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:38.354 23:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.354 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:38.354 23:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.258 23:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:40.258 00:31:40.258 real 0m13.146s 00:31:40.258 user 0m19.202s 00:31:40.258 sys 0m2.697s 00:31:40.258 23:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:40.258 23:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.258 ************************************ 00:31:40.258 END TEST nvmf_host_discovery 00:31:40.258 ************************************ 00:31:40.258 23:01:32 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:40.258 23:01:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:40.258 23:01:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:40.258 23:01:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.258 ************************************ 00:31:40.258 START TEST nvmf_host_multipath_status 00:31:40.258 ************************************ 00:31:40.258 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:40.516 * Looking for test storage... 
00:31:40.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.516 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
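
The host identity used for every connect in this suite comes from nvme-cli: nvmf/common.sh generates a fresh host NQN per run and reuses its UUID as the host ID. A short sketch; the suffix extraction is an assumption about how NVME_HOSTID is derived, but it matches the values printed above:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
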
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:40.517 23:01:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:40.517 23:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:42.418 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:42.419 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:42.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:42.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:42.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:42.419 23:01:34 
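
The "Found net devices under ..." lines come from globbing sysfs: for each supported PCI function the script expands /sys/bus/pci/devices/$pci/net/* and strips the directory prefix to recover the kernel interface name. The same mapping as a standalone snippet:

  pci=0000:0a:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # -> cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
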
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:42.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:42.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:31:42.419 00:31:42.419 --- 10.0.0.2 ping statistics --- 00:31:42.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.419 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:42.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:31:42.419 00:31:42.419 --- 10.0.0.1 ping statistics --- 00:31:42.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.419 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:42.419 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3666367 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3666367 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3666367 ']' 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:42.420 23:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:42.678 [2024-07-26 23:01:34.933715] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
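
The target runs inside a network namespace: one port of the E810 pair (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk while its peer (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and the two pings verify reachability in both directions. Condensed from the ip/iptables commands above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
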
00:31:42.678 [2024-07-26 23:01:34.933789] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.678 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.678 [2024-07-26 23:01:35.003648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:42.678 [2024-07-26 23:01:35.096779] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.678 [2024-07-26 23:01:35.096846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.678 [2024-07-26 23:01:35.096863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.678 [2024-07-26 23:01:35.096877] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.678 [2024-07-26 23:01:35.096889] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:42.678 [2024-07-26 23:01:35.096991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.678 [2024-07-26 23:01:35.096997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.936 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:42.936 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:42.936 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:42.936 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:42.936 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:42.936 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:42.936 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3666367 00:31:42.936 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:43.194 [2024-07-26 23:01:35.515327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.194 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:43.451 Malloc0 00:31:43.452 23:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:43.709 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:43.967 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.225 [2024-07-26 23:01:36.537649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.225 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- 
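
Target-side setup for the multipath test, condensed from the rpc.py calls just above and immediately below (flags per the trace: -a allows any host, -r enables ANA reporting, -m 2 caps namespaces; the second listener on 4421 provides the alternate path):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
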
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:44.483 [2024-07-26 23:01:36.774245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3666650 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3666650 /var/tmp/bdevperf.sock 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3666650 ']' 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:44.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:44.483 23:01:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:44.741 23:01:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:44.741 23:01:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:44.741 23:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:44.999 23:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:45.257 Nvme0n1 00:31:45.257 23:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:45.823 Nvme0n1 00:31:45.823 23:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:45.823 23:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:48.351 23:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:48.352 23:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:48.352 23:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:48.352 23:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:49.290 23:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:49.290 23:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:49.290 23:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.290 23:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:49.593 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.593 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:49.593 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.593 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:49.852 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:49.852 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:49.852 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.852 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:50.109 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.109 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:50.109 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.109 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:50.367 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.367 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:50.367 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.367 23:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:50.625 23:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.625 23:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:50.625 23:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.625 23:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:50.884 23:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.884 23:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:50.884 23:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:51.141 23:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:51.398 23:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:52.330 23:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:52.330 23:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:52.330 23:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.330 23:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:52.588 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:52.588 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:52.588 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.588 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:52.846 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.846 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:52.846 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.846 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:53.103 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:53.103 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:53.103 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.103 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:53.360 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.360 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:53.360 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.360 23:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:53.617 23:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.617 23:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:53.617 23:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.617 23:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:53.874 23:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.874 23:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:53.874 23:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:54.132 23:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:54.391 23:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:55.325 23:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:55.325 23:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:55.325 23:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.325 23:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:55.583 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.583 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:55.583 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.583 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:55.841 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.841 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:55.841 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.841 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:56.099 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.099 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:56.099 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.099 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:56.357 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.357 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:56.357 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.357 23:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:56.615 23:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.615 23:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:56.615 23:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.615 23:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:56.873 23:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.873 23:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:56.873 23:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:57.131 23:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:57.390 23:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:58.764 23:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:58.764 23:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:58.764 23:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.764 23:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:58.764 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.764 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:58.764 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.764 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:59.022 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.022 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:59.022 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.022 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:59.280 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.280 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:59.280 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.280 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:59.538 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.538 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:59.538 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.538 23:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:59.796 23:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
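Every check_status pass in this trace repeats one pattern: ask the bdevperf RPC server for its view of the I/O paths, then pull a single boolean out of the JSON with jq. A minimal sketch of that helper, reconstructed from the xtrace above (the function body is an assumption and the rpc.py path is shortened; only the rpc.py and jq invocations appear verbatim in the log):

    # port_status <trsvcid> <field> <expected>
    # <field> is one of: current, connected, accessible
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        # bdevperf exposes its own RPC socket (-r /var/tmp/bdevperf.sock above)
        actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"${port}\").${field}")
        [[ "$actual" == "$expected" ]]
    }

check_status then amounts to six such calls, one per port for each of the three fields, matching the true/false argument lists seen in the trace.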
00:31:59.797 23:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:59.797 23:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.797 23:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:00.055 23:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:00.055 23:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:00.055 23:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:00.313 23:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:00.571 23:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:01.504 23:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:01.504 23:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:01.504 23:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.504 23:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:01.762 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:01.762 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:01.762 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.762 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:02.020 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:02.020 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:02.020 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.020 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:02.278 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.278 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
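The state changes themselves are driven entirely from the target side: one nvmf_subsystem_listener_set_ana_state RPC per listener, after which the sleep 1 gives the host time to re-read the ANA log page before the next check. A plausible sketch of the wrapper (the function itself is assumed and the rpc.py path shortened; the two RPC calls are copied from the trace):

    # set_ANA_state <state for port 4420> <state for port 4421>
    # states seen in this run: optimized, non_optimized, inaccessible
    set_ANA_state() {
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

Note the invariant the checks encode: connected tracks only the TCP connection, so it stays true in every pass; accessible drops to false only for a listener set inaccessible; and under the initial active_passive behavior at most one path is current at a time, until the bdev_nvme_set_multipath_policy -p active_active call further down allows both optimized paths to be current at once.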
00:32:02.278 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.278 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:02.536 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.536 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:02.536 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.536 23:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:02.793 23:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:02.793 23:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:02.793 23:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.793 23:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:03.050 23:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:03.050 23:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:03.050 23:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:03.307 23:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:03.563 23:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:04.496 23:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:04.496 23:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:04.496 23:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.496 23:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:04.754 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:04.754 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:04.754 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.754 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:05.012 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.012 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:05.012 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.012 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:05.279 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.279 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:05.279 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.279 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:05.565 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.565 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:05.565 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.565 23:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:05.565 23:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:05.565 23:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:05.565 23:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.565 23:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:05.822 23:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.822 23:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:06.079 23:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:06.079 23:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:06.337 23:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:06.595 23:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:07.969 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:07.969 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:07.969 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.969 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:07.969 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.969 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:07.969 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.969 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:08.227 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.227 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:08.227 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.227 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:08.484 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.484 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:08.484 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.484 23:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:08.742 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.742 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:08.742 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.742 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:09.000 23:02:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.000 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:09.000 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.000 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:09.258 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.258 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:09.258 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:09.516 23:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:09.775 23:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:10.711 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:10.711 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:10.711 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.711 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:10.970 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:10.970 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:10.970 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.970 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:11.228 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.228 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:11.228 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.228 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:11.486 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.486 23:02:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:11.486 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.486 23:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:11.744 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.744 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:11.744 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.744 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:12.002 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.002 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:12.002 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.002 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:12.260 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.260 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:12.260 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:12.518 23:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:12.775 23:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:13.707 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:13.707 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:13.707 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.707 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:13.965 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.965 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:13.965 23:02:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.965 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:14.224 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.224 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:14.224 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.224 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:14.483 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.483 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:14.483 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.483 23:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:14.741 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.741 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:14.741 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.741 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:14.999 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.999 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:14.999 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.999 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:15.257 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.257 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:15.257 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:15.515 23:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:15.774 23:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:16.708 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:16.708 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:16.708 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.708 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:16.966 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.966 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:16.966 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.966 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:17.224 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.224 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:17.224 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.224 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:17.483 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.483 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:17.483 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.483 23:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:17.741 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.741 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:17.741 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.741 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:17.999 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.999 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:17.999 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.999 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3666650 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3666650 ']' 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3666650 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3666650 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3666650' 00:32:18.259 killing process with pid 3666650 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3666650 00:32:18.259 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3666650 00:32:18.535 Connection closed with partial response: 00:32:18.535 00:32:18.535 00:32:18.535 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3666650 00:32:18.535 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:18.535 [2024-07-26 23:01:36.831646] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:32:18.535 [2024-07-26 23:01:36.831725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666650 ] 00:32:18.535 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.535 [2024-07-26 23:01:36.893766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.535 [2024-07-26 23:01:36.986790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.535 Running I/O for 90 seconds... 
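The dump that follows is the bdevperf side of the same story: each I/O submitted on a path whose ANA state has just gone inaccessible completes with NVMe status ASYMMETRIC ACCESS INACCESSIBLE, printed here as (03/02), i.e. status code type 03h (path related) / status code 02h, and the multipath bdev retries it on the surviving path. For a rough tally of how many I/Os took that detour, a simple line count over the captured file works (path as cat'd above; the pattern matches the completion lines below):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt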
00:32:18.535 [2024-07-26 23:01:52.584006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:18.535 [2024-07-26 23:01:52.584119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:18.535 [2024-07-26 23:01:52.584163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:18.535 [2024-07-26 23:01:52.584203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:18.535 [2024-07-26 23:01:52.584242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:18.535 [2024-07-26 23:01:52.584282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:18.535 [2024-07-26 23:01:52.584322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:18.535 [2024-07-26 23:01:52.584361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:18.535 [2024-07-26 23:01:52.584401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:18.535 [2024-07-26 23:01:52.584441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.535 [2024-07-26 23:01:52.584458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:32:18.535 [2024-07-26 23:01:52.584481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.535 [2024-07-26 23:01:52.584510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:32:18.535 [2024-07-26 23:01:52.585405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.535 [2024-07-26 23:01:52.585429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:32:18.535-00:32:18.541 [... a long run of further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided (2024-07-26 23:01:52.584481 through 23:01:52.595283): WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all nsid:1 len:8, lba range 92368-93384, all on sqid/qid 1; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0, p:0, m:0, dnr:0, sqhd advancing from 0x0031 through 0x007f and wrapping to 0x0000; the same LBAs recur under new cids as the I/O is requeued ...]
00:32:18.541 [2024-07-26 23:01:52.595283]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:18.541 [2024-07-26 23:01:52.595679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.541 [2024-07-26 23:01:52.595718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.541 [2024-07-26 23:01:52.595756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.541 [2024-07-26 23:01:52.595794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.541 [2024-07-26 23:01:52.595832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.541 [2024-07-26 23:01:52.595875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.541 [2024-07-26 23:01:52.595920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:18.541 [2024-07-26 23:01:52.595943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.542 [2024-07-26 23:01:52.595959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.595982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.595998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:109 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:32:18.542 [2024-07-26 23:01:52.596852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.596967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.596984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.542 [2024-07-26 23:01:52.597376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:18.542 [2024-07-26 23:01:52.597398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.597414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.597436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.597452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.597474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.597490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.597513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.597529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.543 [2024-07-26 23:01:52.598433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:18.543 [2024-07-26 23:01:52.598746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.598974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.598996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.543 [2024-07-26 23:01:52.599457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:18.543 [2024-07-26 23:01:52.599479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:18.544 
[2024-07-26 23:01:52.599904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.599957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.599983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.600501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.600517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.601162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.601185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.601212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.601230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.601253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.544 [2024-07-26 23:01:52.601269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:18.544 [2024-07-26 23:01:52.601292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 23:01:52.601673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:18.545 [2024-07-26 23:01:52.601695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.545 [2024-07-26 
23:01:52.601711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.601733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.601750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.601772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.601788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.608454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.608511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.608550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.608586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.608623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.608661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.608704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.608743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.545 [2024-07-26 23:01:52.608780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.545 [2024-07-26 23:01:52.608816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.545 [2024-07-26 23:01:52.608853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.545 [2024-07-26 23:01:52.608890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.545 [2024-07-26 23:01:52.608926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.545 [2024-07-26 23:01:52.608962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.608984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.545 [2024-07-26 23:01:52.608999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.609021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.609051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.609086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.609103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.609125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.609141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.609163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.609179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.609205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.609222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.609244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.609260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.609282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.545 [2024-07-26 23:01:52.609298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:32:18.545 [2024-07-26 23:01:52.609320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.609970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.609986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.610566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.610582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.611417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.611442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.611470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.611493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.611517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.611538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.611562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.546 [2024-07-26 23:01:52.611578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:32:18.546 [2024-07-26 23:01:52.611601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.546 [2024-07-26 23:01:52.611617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.611639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.611656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.611680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.611696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.611719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.611735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.611757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.611772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.611794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.611810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.611832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.611849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.611885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.611902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.611924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.611940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.611962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.611977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.612975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.612997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.613013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.613034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.613074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.613098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.613115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:18.547 [2024-07-26 23:01:52.613137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.547 [2024-07-26 23:01:52.613153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.613689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.613705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.614964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.614981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.615002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.615017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.615053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.615080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.615110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.615129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.615152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.615168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.615190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.615206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.615229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.548 [2024-07-26 23:01:52.615245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:32:18.548 [2024-07-26 23:01:52.615267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.549 [2024-07-26 23:01:52.615413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.549 [2024-07-26 23:01:52.615451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.549 [2024-07-26 23:01:52.615487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.549 [2024-07-26 23:01:52.615529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.549 [2024-07-26 23:01:52.615568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.549 [2024-07-26 23:01:52.615609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.549 [2024-07-26 23:01:52.615647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.615965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.615981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.616018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.616082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.616125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.616164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.616202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.616240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.616278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.616315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.549 [2024-07-26 23:01:52.616368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:32:18.549 [2024-07-26 23:01:52.616389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.616982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.616998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.617023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.617039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.617087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.617103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.617126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.617142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.617942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.617965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.617992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.550 [2024-07-26 23:01:52.618181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.550 [2024-07-26 23:01:52.618475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:32:18.550 [2024-07-26 23:01:52.618498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1
lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.550 [2024-07-26 23:01:52.618514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.618964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.618986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:32:18.551 [2024-07-26 23:01:52.619283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.551 [2024-07-26 23:01:52.619884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:18.551 [2024-07-26 23:01:52.619906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.619922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.619959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.619976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.619998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.620014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.620036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.620052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.620085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.620102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.620125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.620140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.620162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.620179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.620202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.620220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.620876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.620899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.620926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.620949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.620972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.620989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:18.552 [2024-07-26 23:01:52.621120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.552 [2024-07-26 23:01:52.621892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.552 [2024-07-26 23:01:52.621934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:18.552 [2024-07-26 23:01:52.621955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.553 [2024-07-26 23:01:52.621971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.621992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.553 [2024-07-26 23:01:52.622008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.553 [2024-07-26 23:01:52.622068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.553 [2024-07-26 23:01:52.622117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.553 [2024-07-26 23:01:52.622157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.553 [2024-07-26 23:01:52.622195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:32:18.553 [2024-07-26 23:01:52.622293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.622973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.622995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.623032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.623095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.623139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.623178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.623216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.623255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.623293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.623331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.623385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.623409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.629841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.553 [2024-07-26 23:01:52.629874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:18.553 [2024-07-26 23:01:52.629899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:18.553 [2024-07-26 23:01:52.629916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.629938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.629954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.629976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.629991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.630013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.630028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.630077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.630096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.630119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.630136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.630977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.554 [2024-07-26 23:01:52.631276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:18.554 [2024-07-26 23:01:52.631802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.554 [2024-07-26 23:01:52.631817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:18.555 [2024-07-26 23:01:52.631838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.555 [2024-07-26 23:01:52.631853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:18.555 [2024-07-26 23:01:52.631876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.555 [2024-07-26 23:01:52.631893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:18.555 [2024-07-26 23:01:52.631914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.555 [2024-07-26 23:01:52.631929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:32:18.555 [2024-07-26 23:01:52.631950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.555 [2024-07-26 23:01:52.631966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0
[... repetitive I/O trace condensed: several hundred more command/completion pairs of this exact form (nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion), timestamps 23:01:52.631987 through 23:01:52.636501. WRITE commands (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and occasional READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1 nsid:1, LBAs in the 92368-93384 range, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, with sqhd incrementing and wrapping through 0x0000-0x007f ...]
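For reference, the "(03/02)" printed in every completion above is the (SCT/SC) pair from the NVMe completion status field: status code type 0x3 (Path Related Status) with status code 0x02 (Asymmetric Access Inaccessible), meaning the ANA state of this controller path forbids I/O to the namespace and the host should retry on another path; dnr:0 confirms retry is permitted. A minimal, self-contained sketch of decoding that 16-bit status word per the NVMe base specification's field layout is below; the variable names are illustrative, not SPDK API.

    /* Sketch: decode the 16-bit NVMe CQE status word (phase tag in bit 0). */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Example word with sct=0x3 (Path Related) and sc=0x02 (Asymmetric
         * Access Inaccessible), matching the "(03/02) ... p:0 m:0 dnr:0"
         * completions in the trace above. */
        uint16_t status = (uint16_t)((0x3u << 9) | (0x02u << 1));

        unsigned p   = status & 0x1;          /* phase tag        */
        unsigned sc  = (status >> 1) & 0xff;  /* status code      */
        unsigned sct = (status >> 9) & 0x7;   /* status code type */
        unsigned m   = (status >> 14) & 0x1;  /* more             */
        unsigned dnr = (status >> 15) & 0x1;  /* do not retry     */

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        /* dnr:0 means the command may be retried, which is why the
         * multipath logic under test re-queues these WRITEs instead of
         * failing them back to the application. */
        return 0;
    }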
[... condensed trace continues, timestamps 23:01:52.636516 through 23:01:52.642227: identical WRITE/READ command and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pairs on sqid:1 nsid:1, cdw0:0 p:0 m:0 dnr:0 ...]
00:32:18.560 [2024-07-26 23:01:52.642250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642266] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:18.561 [2024-07-26 23:01:52.642671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.642974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.642995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.561 [2024-07-26 23:01:52.643928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.643971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.643998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.644014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.644056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.644082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.644111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.644129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:18.561 [2024-07-26 23:01:52.644156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.561 [2024-07-26 23:01:52.644173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:32:18.562 [2024-07-26 23:01:52.644289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.644966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.644993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645169] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 
23:01:52.645623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:18.562 [2024-07-26 23:01:52.645777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.562 [2024-07-26 23:01:52.645793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:18.563 [2024-07-26 23:01:52.645819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.563 [2024-07-26 23:01:52.645835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:18.563 [2024-07-26 23:01:52.645862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.563 [2024-07-26 23:01:52.645878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:18.563 [2024-07-26 23:01:52.645904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.563 [2024-07-26 23:01:52.645920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:18.563 [2024-07-26 23:01:52.645946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.563 [2024-07-26 23:01:52.645962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:18.563 [2024-07-26 23:01:52.645989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.563 [2024-07-26 23:01:52.646005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:18.563 [2024-07-26 23:01:52.646031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92968 len:8 SGL DATA 
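The "(03/02)" on every completion above is the NVMe status code type / status code pair: SCT 0x3 is Path Related Status, and SC 0x02 within it is ASYMMETRIC ACCESS INACCESSIBLE, i.e. the I/O failed because the ANA state of the path went inaccessible, which is exactly the condition this multipath-status test provokes. A minimal Python sketch (illustrative, not SPDK code) of how the 16-bit completion status word behind these prints decodes; the bit layout follows the NVMe completion status field (and SPDK's spdk_nvme_status bitfield):

```python
# Illustrative decoder for the 16-bit NVMe completion status word that the
# log lines above render as "(SCT/SC) ... p:_ m:_ dnr:_".
# Layout: bit 0 = P (phase tag), bits 1-8 = SC (status code),
# bits 9-11 = SCT (status code type), bits 12-13 = CRD (command retry delay),
# bit 14 = M (more), bit 15 = DNR (do not retry).

SCT_NAMES = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC",
             0x2: "MEDIA AND DATA INTEGRITY", 0x3: "PATH RELATED"}
PATH_SC_NAMES = {0x00: "INTERNAL PATH ERROR",
                 0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
                 0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
                 0x03: "ASYMMETRIC ACCESS TRANSITION"}

def decode_status(status: int) -> str:
    p = status & 0x1
    sc = (status >> 1) & 0xFF
    sct = (status >> 9) & 0x7
    m = (status >> 14) & 0x1
    dnr = (status >> 15) & 0x1
    name = PATH_SC_NAMES.get(sc, "?") if sct == 0x3 else SCT_NAMES.get(sct, "?")
    return f"{name} ({sct:02x}/{sc:02x}) p:{p} m:{m} dnr:{dnr}"

# The status every I/O in these runs completed with: sct=0x3, sc=0x02.
print(decode_status((0x3 << 9) | (0x02 << 1)))
# -> ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
```

Note dnr:0 throughout: the controller flags the failure as retryable, so the host multipath layer is expected to reissue these commands on another path rather than fail them upward.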
00:32:18.563 [2024-07-26 23:02:08.096588 - 23:02:08.101158] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: second run with the same pattern on qid:1: WRITE sqid:1 lba:67048-68008 and READ sqid:1 lba:66992-67280, len:8 each, all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
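Runs like the two above are regular enough to be condensed mechanically. A small sketch (hypothetical helper, not part of the SPDK test suite) that scans log lines for the command notices and reports the LBA span and count per opcode:

```python
import re
from collections import defaultdict

# Matches the command half of the notice pairs, e.g.
# "... 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93056 len:8 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: "
    r"(?P<op>READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(?P<lba>\d+) len:\d+"
)

def lba_spans(lines):
    """Return {op: (min_lba, max_lba, count)} over all matching lines."""
    spans = defaultdict(lambda: [None, None, 0])
    for line in lines:
        m = CMD_RE.search(line)
        if not m:
            continue
        lba, s = int(m.group("lba")), spans[m.group("op")]
        s[0] = lba if s[0] is None else min(s[0], lba)
        s[1] = lba if s[1] is None else max(s[1], lba)
        s[2] += 1
    return {op: tuple(s) for op, s in spans.items()}

demo = ["nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
        "WRITE sqid:1 cid:108 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000"]
print(lba_spans(demo))  # -> {'WRITE': (93056, 93056, 1)}
```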
lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.565 [2024-07-26 23:02:08.100946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:18.565 [2024-07-26 23:02:08.100967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.565 [2024-07-26 23:02:08.100983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:18.565 [2024-07-26 23:02:08.101004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.565 [2024-07-26 23:02:08.101019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:18.565 [2024-07-26 23:02:08.101041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.565 [2024-07-26 23:02:08.101056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:18.565 [2024-07-26 23:02:08.101102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.565 [2024-07-26 23:02:08.101119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:18.565 [2024-07-26 23:02:08.101141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.565 [2024-07-26 23:02:08.101158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:32:18.565 Received shutdown signal, test time was about 32.241023 seconds
00:32:18.565
00:32:18.565 Latency(us)
00:32:18.565 Device Information : runtime(s)    IOPS    MiB/s  Fail/s  TO/s   Average    min      max
00:32:18.565 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:18.565 Verification LBA range: start 0x0 length 0x4000
00:32:18.565 Nvme0n1 : 32.24 8082.24 31.57 0.00 0.00 15807.69 238.17 4076242.11
00:32:18.565 ===================================================================================================================
00:32:18.565 Total : 8082.24 31.57 0.00 0.00 15807.69 238.17 4076242.11
00:32:18.565 23:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119
-- # '[' tcp == tcp ']' 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.824 rmmod nvme_tcp 00:32:18.824 rmmod nvme_fabrics 00:32:18.824 rmmod nvme_keyring 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3666367 ']' 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3666367 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3666367 ']' 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3666367 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3666367 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3666367' 00:32:18.824 killing process with pid 3666367 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3666367 00:32:18.824 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3666367 00:32:19.085 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:19.085 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:19.085 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:19.085 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.085 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:19.085 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.085 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:19.085 23:02:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.650 23:02:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:21.650 00:32:21.650 real 0m40.795s 00:32:21.650 user 2m2.787s 00:32:21.650 sys 0m10.521s 00:32:21.650 23:02:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:21.650 23:02:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:21.650 ************************************ 00:32:21.650 END TEST 
nvmf_host_multipath_status 00:32:21.650 ************************************ 00:32:21.650 23:02:13 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:21.650 23:02:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:21.650 23:02:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:21.650 23:02:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.650 ************************************ 00:32:21.650 START TEST nvmf_discovery_remove_ifc 00:32:21.650 ************************************ 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:21.650 * Looking for test storage... 00:32:21.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.650 23:02:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:32:21.650 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:21.651 23:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:23.555 23:02:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:23.555 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:23.555 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.555 23:02:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:23.555 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:23.555 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.555 23:02:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:23.555 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:23.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:32:23.556 00:32:23.556 --- 10.0.0.2 ping statistics --- 00:32:23.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.556 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:23.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:32:23.556 00:32:23.556 --- 10.0.0.1 ping statistics --- 00:32:23.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.556 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3672839 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3672839 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3672839 ']' 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:23.556 23:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.556 [2024-07-26 23:02:15.867642] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
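For reference, the nvmf_tcp_init sequence traced above splits the two E810 ports found earlier: cvl_0_0 is moved into a private namespace as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of that wiring, assuming the cvl_0_0/cvl_0_1 names from this run:

  # move the target port into its own namespace and address both ends
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
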
00:32:23.556 [2024-07-26 23:02:15.867710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.556 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.556 [2024-07-26 23:02:15.935295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.556 [2024-07-26 23:02:16.027143] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.556 [2024-07-26 23:02:16.027200] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.556 [2024-07-26 23:02:16.027214] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.556 [2024-07-26 23:02:16.027225] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.556 [2024-07-26 23:02:16.027235] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:23.556 [2024-07-26 23:02:16.027269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.815 [2024-07-26 23:02:16.185963] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.815 [2024-07-26 23:02:16.194222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:23.815 null0 00:32:23.815 [2024-07-26 23:02:16.226103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3672862 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3672862 /tmp/host.sock 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3672862 ']' 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:23.815 
23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:23.815 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:23.815 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.815 [2024-07-26 23:02:16.291931] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:32:23.815 [2024-07-26 23:02:16.292003] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672862 ] 00:32:24.075 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.075 [2024-07-26 23:02:16.356617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.075 [2024-07-26 23:02:16.447855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.075 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.335 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.335 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:24.335 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.335 23:02:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.272 [2024-07-26 23:02:17.655271] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:25.272 [2024-07-26 23:02:17.655303] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:25.272 [2024-07-26 23:02:17.655329] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:25.272 [2024-07-26 23:02:17.742638] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:25.532 [2024-07-26 23:02:17.845528] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:25.532 [2024-07-26 23:02:17.845604] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:25.533 [2024-07-26 23:02:17.845651] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:25.533 [2024-07-26 23:02:17.845680] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:25.533 [2024-07-26 23:02:17.845720] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.533 [2024-07-26 23:02:17.852378] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1440900 was disconnected and freed. delete nvme_qpair. 
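The discovery_attach_cb/discovery_log_page_cb records above come from the host-side app started at host/discovery_remove_ifc.sh@58, driven over its own RPC socket. A condensed sketch of that sequence, using the flags seen in the trace (paths shortened; rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py):

  # second SPDK app acting as the host, with bdev_nvme tracing enabled
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  # (the test then polls waitforlisten on the pid before issuing RPCs)
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  ./scripts/rpc.py -s /tmp/host.sock framework_start_init
  # attach via the discovery service; short timeouts so path loss is declared quickly
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
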
00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:25.533 23:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:26.909 23:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:26.909 23:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.909 23:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.909 23:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.910 23:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:26.910 23:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:26.910 23:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:26.910 23:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.910 23:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:26.910 23:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:27.848 23:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:28.785 23:02:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:29.723 23:02:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
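The repeating blocks of @29/@33/@34 trace lines above are the test's wait_for_bdev polling: it keeps listing the host's bdevs until the flattened list matches the expected name. Roughly, assuming the helper shape implied by the trace (the jq/sort/xargs pipeline is verbatim from host/discovery_remove_ifc.sh@29; $expected is a hypothetical stand-in for the argument, e.g. nvme0n1):

  get_bdev_list() {
      # flatten the current bdev names into one sorted line, e.g. "nvme0n1"
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  while [[ "$(get_bdev_list)" != "$expected" ]]; do
      sleep 1
  done
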
00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:31.099 23:02:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:31.099 [2024-07-26 23:02:23.286626] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:31.099 [2024-07-26 23:02:23.286701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.099 [2024-07-26 23:02:23.286726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.099 [2024-07-26 23:02:23.286746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.099 [2024-07-26 23:02:23.286762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.099 [2024-07-26 23:02:23.286779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.099 [2024-07-26 23:02:23.286794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.099 [2024-07-26 23:02:23.286810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.099 [2024-07-26 23:02:23.286826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.099 [2024-07-26 23:02:23.286842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.099 [2024-07-26 23:02:23.286857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.099 [2024-07-26 23:02:23.286872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1407990 is same with the state(5) to be set 00:32:31.099 [2024-07-26 23:02:23.296644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1407990 (9): Bad file descriptor 00:32:31.099 [2024-07-26 23:02:23.306690] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.036 [2024-07-26 23:02:24.333093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:32.036 [2024-07-26 
23:02:24.333152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1407990 with addr=10.0.0.2, port=4420 00:32:32.036 [2024-07-26 23:02:24.333180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1407990 is same with the state(5) to be set 00:32:32.036 [2024-07-26 23:02:24.333223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1407990 (9): Bad file descriptor 00:32:32.036 [2024-07-26 23:02:24.333680] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:32.036 [2024-07-26 23:02:24.333716] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:32.036 [2024-07-26 23:02:24.333735] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:32.036 [2024-07-26 23:02:24.333754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:32.036 [2024-07-26 23:02:24.333783] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:32.036 [2024-07-26 23:02:24.333803] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:32.036 23:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:32.971 [2024-07-26 23:02:25.336302] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:32.971 [2024-07-26 23:02:25.336349] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:32.971 [2024-07-26 23:02:25.336371] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:32.971 [2024-07-26 23:02:25.336384] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:32.971 [2024-07-26 23:02:25.336421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
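What severed the path is the pair traced at host/discovery_remove_ifc.sh@75 and @76 further up: the target's address is deleted and its port downed inside the namespace, so the host's reconnect attempts can only hit connect() errno 110 as logged above. With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 from the discovery call, bdev_nvme retries roughly once a second and then gives the controller up. A sketch of the fault injection, plus one way to watch the retry window from outside (assuming the standard rpc.py in this tree):

  # sever the target path from under the host
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # inspect controller/reconnect state while the loss timeout is running
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
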
00:32:32.971 [2024-07-26 23:02:25.336463] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:32.971 [2024-07-26 23:02:25.336506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.971 [2024-07-26 23:02:25.336531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.971 [2024-07-26 23:02:25.336552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.971 [2024-07-26 23:02:25.336567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.971 [2024-07-26 23:02:25.336590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.971 [2024-07-26 23:02:25.336605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.971 [2024-07-26 23:02:25.336621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.971 [2024-07-26 23:02:25.336635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.971 [2024-07-26 23:02:25.336651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.971 [2024-07-26 23:02:25.336665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.971 [2024-07-26 23:02:25.336680] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
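Once the loss timeout fires, remove_discovery_entry (traced just above) aborts the queued ASYNC EVENT REQUEST and KEEP ALIVE commands with SQ DELETION and drops the namespace bdev, so the same polling loop now waits for the list to drain:

  # host/discovery_remove_ifc.sh@79: with the path gone, block until no bdevs remain
  wait_for_bdev ''
  # get_bdev_list keeps returning "nvme0n1" until ctrlr-loss-timeout-sec expires,
  # after which the @33 check [[ '' != '' ]] finally passes
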
00:32:32.971 [2024-07-26 23:02:25.336890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1406de0 (9): Bad file descriptor 00:32:32.971 [2024-07-26 23:02:25.337911] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:32.971 [2024-07-26 23:02:25.337937] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.971 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:32.980 23:02:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:34.363 23:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:34.931 [2024-07-26 23:02:27.349645] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:34.931 [2024-07-26 23:02:27.349701] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:34.931 [2024-07-26 23:02:27.349728] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:35.191 [2024-07-26 23:02:27.435969] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.191 [2024-07-26 23:02:27.539114] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:35.191 [2024-07-26 23:02:27.539179] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:35.191 [2024-07-26 23:02:27.539215] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:35.191 [2024-07-26 23:02:27.539241] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:35.191 [2024-07-26 23:02:27.539256] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:35.191 [2024-07-26 23:02:27.547942] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1421f60 was disconnected and freed. delete nvme_qpair. 
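The @29/@33/@34 markers repeating through the trace are a simple poll: list the host-side bdevs over the per-host RPC socket, compare against the expected name, and sleep until they match. A minimal sketch of that helper pair, reconstructed from the xtrace above (rpc_cmd is assumed to wrap scripts/rpc.py, passing the -s socket argument through):

    get_bdev_list() {
        # Query bdev names from the host app on /tmp/host.sock and
        # normalize them to one sorted, space-separated line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected set,
        # e.g. wait_for_bdev nvme1n1 after the interface comes back up.
        while [[ "$(get_bdev_list)" != "$*" ]]; do
            sleep 1
        done
    }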
00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:35.191 23:02:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:36.126 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:36.126 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.126 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:36.126 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3672862 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3672862 ']' 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3672862 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3672862 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3672862' 00:32:36.127 killing process with pid 3672862 00:32:36.127 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3672862 00:32:36.386 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3672862 00:32:36.386 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:36.386 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:36.386 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:36.386 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:36.386 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:36.386 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:36.386 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:36.386 rmmod nvme_tcp 00:32:36.386 rmmod nvme_fabrics 00:32:36.386 rmmod nvme_keyring 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3672839 ']' 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3672839 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3672839 ']' 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3672839 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3672839 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3672839' 00:32:36.644 killing process with pid 3672839 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3672839 00:32:36.644 23:02:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3672839 00:32:36.902 23:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:36.902 23:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:36.902 23:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:36.902 23:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:36.902 23:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:36.902 23:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.902 23:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:36.902 23:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.843 23:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:38.843 00:32:38.843 real 0m17.614s 00:32:38.843 user 0m25.403s 00:32:38.843 sys 0m3.098s 00:32:38.843 23:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:38.843 23:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:38.843 ************************************ 00:32:38.843 END TEST nvmf_discovery_remove_ifc 00:32:38.843 ************************************ 00:32:38.843 23:02:31 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:38.843 23:02:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:38.843 23:02:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:38.843 23:02:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:32:38.843 ************************************ 00:32:38.843 START TEST nvmf_identify_kernel_target 00:32:38.843 ************************************ 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:38.843 * Looking for test storage... 00:32:38.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:38.843 23:02:31 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.843 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:38.844 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.844 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:38.844 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:38.844 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:38.844 23:02:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:41.381 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:41.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:41.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:41.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:41.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.382 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:41.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:32:41.383 00:32:41.383 --- 10.0.0.2 ping statistics --- 00:32:41.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.383 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:32:41.383 00:32:41.383 --- 10.0.0.1 ping statistics --- 00:32:41.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.383 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:41.383 23:02:33 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:41.383 23:02:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:41.954 Waiting for block devices as requested 00:32:42.215 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:42.215 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:42.215 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:42.475 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:42.475 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:42.475 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:42.735 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:42.735 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:42.735 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:42.735 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:42.994 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:42.994 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:42.994 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:42.994 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:43.253 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:43.253 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:43.253 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:43.510 No valid GPT data, bailing 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:43.510 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:43.511 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:43.511 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:43.511 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:43.511 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:43.511 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:43.511 00:32:43.511 Discovery Log Number of Records 2, Generation counter 2 00:32:43.511 =====Discovery Log Entry 0====== 00:32:43.511 trtype: tcp 00:32:43.511 adrfam: ipv4 00:32:43.511 subtype: current discovery subsystem 00:32:43.511 treq: not specified, sq flow control disable supported 00:32:43.511 portid: 1 00:32:43.511 trsvcid: 4420 00:32:43.511 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:43.511 traddr: 10.0.0.1 00:32:43.511 eflags: none 00:32:43.511 sectype: none 00:32:43.511 =====Discovery Log Entry 1====== 00:32:43.511 trtype: tcp 00:32:43.511 adrfam: ipv4 00:32:43.511 subtype: nvme subsystem 00:32:43.511 treq: not specified, sq flow control disable supported 00:32:43.511 portid: 1 00:32:43.511 trsvcid: 4420 00:32:43.511 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:43.511 traddr: 10.0.0.1 00:32:43.511 eflags: none 00:32:43.511 sectype: none 00:32:43.511 23:02:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:43.511 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:43.511 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.511 ===================================================== 00:32:43.511 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:43.511 ===================================================== 00:32:43.511 Controller Capabilities/Features 00:32:43.511 ================================ 00:32:43.511 Vendor ID: 0000 00:32:43.511 Subsystem Vendor ID: 0000 00:32:43.511 Serial Number: e4fd20afb9fc87b33f1d 00:32:43.511 Model Number: Linux 00:32:43.511 Firmware Version: 6.7.0-68 00:32:43.511 Recommended Arb Burst: 0 00:32:43.511 IEEE OUI Identifier: 00 00 00 00:32:43.511 Multi-path I/O 00:32:43.511 May have multiple subsystem ports: No 00:32:43.511 May have multiple 
controllers: No 00:32:43.511 Associated with SR-IOV VF: No 00:32:43.511 Max Data Transfer Size: Unlimited 00:32:43.511 Max Number of Namespaces: 0 00:32:43.511 Max Number of I/O Queues: 1024 00:32:43.511 NVMe Specification Version (VS): 1.3 00:32:43.511 NVMe Specification Version (Identify): 1.3 00:32:43.511 Maximum Queue Entries: 1024 00:32:43.511 Contiguous Queues Required: No 00:32:43.511 Arbitration Mechanisms Supported 00:32:43.511 Weighted Round Robin: Not Supported 00:32:43.511 Vendor Specific: Not Supported 00:32:43.511 Reset Timeout: 7500 ms 00:32:43.511 Doorbell Stride: 4 bytes 00:32:43.511 NVM Subsystem Reset: Not Supported 00:32:43.511 Command Sets Supported 00:32:43.511 NVM Command Set: Supported 00:32:43.511 Boot Partition: Not Supported 00:32:43.511 Memory Page Size Minimum: 4096 bytes 00:32:43.511 Memory Page Size Maximum: 4096 bytes 00:32:43.511 Persistent Memory Region: Not Supported 00:32:43.511 Optional Asynchronous Events Supported 00:32:43.511 Namespace Attribute Notices: Not Supported 00:32:43.511 Firmware Activation Notices: Not Supported 00:32:43.511 ANA Change Notices: Not Supported 00:32:43.511 PLE Aggregate Log Change Notices: Not Supported 00:32:43.511 LBA Status Info Alert Notices: Not Supported 00:32:43.511 EGE Aggregate Log Change Notices: Not Supported 00:32:43.511 Normal NVM Subsystem Shutdown event: Not Supported 00:32:43.511 Zone Descriptor Change Notices: Not Supported 00:32:43.511 Discovery Log Change Notices: Supported 00:32:43.511 Controller Attributes 00:32:43.511 128-bit Host Identifier: Not Supported 00:32:43.511 Non-Operational Permissive Mode: Not Supported 00:32:43.511 NVM Sets: Not Supported 00:32:43.511 Read Recovery Levels: Not Supported 00:32:43.511 Endurance Groups: Not Supported 00:32:43.511 Predictable Latency Mode: Not Supported 00:32:43.511 Traffic Based Keep ALive: Not Supported 00:32:43.511 Namespace Granularity: Not Supported 00:32:43.511 SQ Associations: Not Supported 00:32:43.511 UUID List: Not Supported 00:32:43.511 Multi-Domain Subsystem: Not Supported 00:32:43.511 Fixed Capacity Management: Not Supported 00:32:43.511 Variable Capacity Management: Not Supported 00:32:43.511 Delete Endurance Group: Not Supported 00:32:43.511 Delete NVM Set: Not Supported 00:32:43.511 Extended LBA Formats Supported: Not Supported 00:32:43.511 Flexible Data Placement Supported: Not Supported 00:32:43.511 00:32:43.511 Controller Memory Buffer Support 00:32:43.511 ================================ 00:32:43.511 Supported: No 00:32:43.511 00:32:43.511 Persistent Memory Region Support 00:32:43.511 ================================ 00:32:43.511 Supported: No 00:32:43.511 00:32:43.511 Admin Command Set Attributes 00:32:43.511 ============================ 00:32:43.511 Security Send/Receive: Not Supported 00:32:43.511 Format NVM: Not Supported 00:32:43.511 Firmware Activate/Download: Not Supported 00:32:43.511 Namespace Management: Not Supported 00:32:43.511 Device Self-Test: Not Supported 00:32:43.511 Directives: Not Supported 00:32:43.511 NVMe-MI: Not Supported 00:32:43.511 Virtualization Management: Not Supported 00:32:43.511 Doorbell Buffer Config: Not Supported 00:32:43.511 Get LBA Status Capability: Not Supported 00:32:43.511 Command & Feature Lockdown Capability: Not Supported 00:32:43.511 Abort Command Limit: 1 00:32:43.511 Async Event Request Limit: 1 00:32:43.511 Number of Firmware Slots: N/A 00:32:43.511 Firmware Slot 1 Read-Only: N/A 00:32:43.770 Firmware Activation Without Reset: N/A 00:32:43.770 Multiple Update Detection Support: N/A 
00:32:43.770 Firmware Update Granularity: No Information Provided 00:32:43.770 Per-Namespace SMART Log: No 00:32:43.770 Asymmetric Namespace Access Log Page: Not Supported 00:32:43.770 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:43.770 Command Effects Log Page: Not Supported 00:32:43.770 Get Log Page Extended Data: Supported 00:32:43.770 Telemetry Log Pages: Not Supported 00:32:43.770 Persistent Event Log Pages: Not Supported 00:32:43.770 Supported Log Pages Log Page: May Support 00:32:43.770 Commands Supported & Effects Log Page: Not Supported 00:32:43.770 Feature Identifiers & Effects Log Page:May Support 00:32:43.770 NVMe-MI Commands & Effects Log Page: May Support 00:32:43.770 Data Area 4 for Telemetry Log: Not Supported 00:32:43.770 Error Log Page Entries Supported: 1 00:32:43.770 Keep Alive: Not Supported 00:32:43.770 00:32:43.770 NVM Command Set Attributes 00:32:43.770 ========================== 00:32:43.770 Submission Queue Entry Size 00:32:43.770 Max: 1 00:32:43.770 Min: 1 00:32:43.770 Completion Queue Entry Size 00:32:43.770 Max: 1 00:32:43.770 Min: 1 00:32:43.770 Number of Namespaces: 0 00:32:43.770 Compare Command: Not Supported 00:32:43.770 Write Uncorrectable Command: Not Supported 00:32:43.770 Dataset Management Command: Not Supported 00:32:43.770 Write Zeroes Command: Not Supported 00:32:43.770 Set Features Save Field: Not Supported 00:32:43.770 Reservations: Not Supported 00:32:43.770 Timestamp: Not Supported 00:32:43.770 Copy: Not Supported 00:32:43.770 Volatile Write Cache: Not Present 00:32:43.770 Atomic Write Unit (Normal): 1 00:32:43.770 Atomic Write Unit (PFail): 1 00:32:43.770 Atomic Compare & Write Unit: 1 00:32:43.770 Fused Compare & Write: Not Supported 00:32:43.770 Scatter-Gather List 00:32:43.770 SGL Command Set: Supported 00:32:43.770 SGL Keyed: Not Supported 00:32:43.770 SGL Bit Bucket Descriptor: Not Supported 00:32:43.770 SGL Metadata Pointer: Not Supported 00:32:43.770 Oversized SGL: Not Supported 00:32:43.770 SGL Metadata Address: Not Supported 00:32:43.770 SGL Offset: Supported 00:32:43.770 Transport SGL Data Block: Not Supported 00:32:43.770 Replay Protected Memory Block: Not Supported 00:32:43.770 00:32:43.770 Firmware Slot Information 00:32:43.770 ========================= 00:32:43.770 Active slot: 0 00:32:43.770 00:32:43.770 00:32:43.770 Error Log 00:32:43.770 ========= 00:32:43.770 00:32:43.770 Active Namespaces 00:32:43.770 ================= 00:32:43.770 Discovery Log Page 00:32:43.770 ================== 00:32:43.770 Generation Counter: 2 00:32:43.770 Number of Records: 2 00:32:43.770 Record Format: 0 00:32:43.770 00:32:43.770 Discovery Log Entry 0 00:32:43.770 ---------------------- 00:32:43.770 Transport Type: 3 (TCP) 00:32:43.770 Address Family: 1 (IPv4) 00:32:43.770 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:43.770 Entry Flags: 00:32:43.770 Duplicate Returned Information: 0 00:32:43.770 Explicit Persistent Connection Support for Discovery: 0 00:32:43.770 Transport Requirements: 00:32:43.770 Secure Channel: Not Specified 00:32:43.770 Port ID: 1 (0x0001) 00:32:43.770 Controller ID: 65535 (0xffff) 00:32:43.770 Admin Max SQ Size: 32 00:32:43.770 Transport Service Identifier: 4420 00:32:43.770 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:43.770 Transport Address: 10.0.0.1 00:32:43.770 Discovery Log Entry 1 00:32:43.770 ---------------------- 00:32:43.771 Transport Type: 3 (TCP) 00:32:43.771 Address Family: 1 (IPv4) 00:32:43.771 Subsystem Type: 2 (NVM Subsystem) 00:32:43.771 Entry Flags: 
00:32:43.771 Duplicate Returned Information: 0 00:32:43.771 Explicit Persistent Connection Support for Discovery: 0 00:32:43.771 Transport Requirements: 00:32:43.771 Secure Channel: Not Specified 00:32:43.771 Port ID: 1 (0x0001) 00:32:43.771 Controller ID: 65535 (0xffff) 00:32:43.771 Admin Max SQ Size: 32 00:32:43.771 Transport Service Identifier: 4420 00:32:43.771 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:43.771 Transport Address: 10.0.0.1 00:32:43.771 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:43.771 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.771 get_feature(0x01) failed 00:32:43.771 get_feature(0x02) failed 00:32:43.771 get_feature(0x04) failed 00:32:43.771 ===================================================== 00:32:43.771 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:43.771 ===================================================== 00:32:43.771 Controller Capabilities/Features 00:32:43.771 ================================ 00:32:43.771 Vendor ID: 0000 00:32:43.771 Subsystem Vendor ID: 0000 00:32:43.771 Serial Number: eba61af4ec23c57aabaf 00:32:43.771 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:43.771 Firmware Version: 6.7.0-68 00:32:43.771 Recommended Arb Burst: 6 00:32:43.771 IEEE OUI Identifier: 00 00 00 00:32:43.771 Multi-path I/O 00:32:43.771 May have multiple subsystem ports: Yes 00:32:43.771 May have multiple controllers: Yes 00:32:43.771 Associated with SR-IOV VF: No 00:32:43.771 Max Data Transfer Size: Unlimited 00:32:43.771 Max Number of Namespaces: 1024 00:32:43.771 Max Number of I/O Queues: 128 00:32:43.771 NVMe Specification Version (VS): 1.3 00:32:43.771 NVMe Specification Version (Identify): 1.3 00:32:43.771 Maximum Queue Entries: 1024 00:32:43.771 Contiguous Queues Required: No 00:32:43.771 Arbitration Mechanisms Supported 00:32:43.771 Weighted Round Robin: Not Supported 00:32:43.771 Vendor Specific: Not Supported 00:32:43.771 Reset Timeout: 7500 ms 00:32:43.771 Doorbell Stride: 4 bytes 00:32:43.771 NVM Subsystem Reset: Not Supported 00:32:43.771 Command Sets Supported 00:32:43.771 NVM Command Set: Supported 00:32:43.771 Boot Partition: Not Supported 00:32:43.771 Memory Page Size Minimum: 4096 bytes 00:32:43.771 Memory Page Size Maximum: 4096 bytes 00:32:43.771 Persistent Memory Region: Not Supported 00:32:43.771 Optional Asynchronous Events Supported 00:32:43.771 Namespace Attribute Notices: Supported 00:32:43.771 Firmware Activation Notices: Not Supported 00:32:43.771 ANA Change Notices: Supported 00:32:43.771 PLE Aggregate Log Change Notices: Not Supported 00:32:43.771 LBA Status Info Alert Notices: Not Supported 00:32:43.771 EGE Aggregate Log Change Notices: Not Supported 00:32:43.771 Normal NVM Subsystem Shutdown event: Not Supported 00:32:43.771 Zone Descriptor Change Notices: Not Supported 00:32:43.771 Discovery Log Change Notices: Not Supported 00:32:43.771 Controller Attributes 00:32:43.771 128-bit Host Identifier: Supported 00:32:43.771 Non-Operational Permissive Mode: Not Supported 00:32:43.771 NVM Sets: Not Supported 00:32:43.771 Read Recovery Levels: Not Supported 00:32:43.771 Endurance Groups: Not Supported 00:32:43.771 Predictable Latency Mode: Not Supported 00:32:43.771 Traffic Based Keep ALive: Supported 00:32:43.771 Namespace Granularity: Not Supported 
00:32:43.771 SQ Associations: Not Supported 00:32:43.771 UUID List: Not Supported 00:32:43.771 Multi-Domain Subsystem: Not Supported 00:32:43.771 Fixed Capacity Management: Not Supported 00:32:43.771 Variable Capacity Management: Not Supported 00:32:43.771 Delete Endurance Group: Not Supported 00:32:43.771 Delete NVM Set: Not Supported 00:32:43.771 Extended LBA Formats Supported: Not Supported 00:32:43.771 Flexible Data Placement Supported: Not Supported 00:32:43.771 00:32:43.771 Controller Memory Buffer Support 00:32:43.771 ================================ 00:32:43.771 Supported: No 00:32:43.771 00:32:43.771 Persistent Memory Region Support 00:32:43.771 ================================ 00:32:43.771 Supported: No 00:32:43.771 00:32:43.771 Admin Command Set Attributes 00:32:43.771 ============================ 00:32:43.771 Security Send/Receive: Not Supported 00:32:43.771 Format NVM: Not Supported 00:32:43.771 Firmware Activate/Download: Not Supported 00:32:43.771 Namespace Management: Not Supported 00:32:43.771 Device Self-Test: Not Supported 00:32:43.771 Directives: Not Supported 00:32:43.771 NVMe-MI: Not Supported 00:32:43.771 Virtualization Management: Not Supported 00:32:43.771 Doorbell Buffer Config: Not Supported 00:32:43.771 Get LBA Status Capability: Not Supported 00:32:43.771 Command & Feature Lockdown Capability: Not Supported 00:32:43.771 Abort Command Limit: 4 00:32:43.771 Async Event Request Limit: 4 00:32:43.771 Number of Firmware Slots: N/A 00:32:43.771 Firmware Slot 1 Read-Only: N/A 00:32:43.771 Firmware Activation Without Reset: N/A 00:32:43.771 Multiple Update Detection Support: N/A 00:32:43.771 Firmware Update Granularity: No Information Provided 00:32:43.771 Per-Namespace SMART Log: Yes 00:32:43.771 Asymmetric Namespace Access Log Page: Supported 00:32:43.771 ANA Transition Time : 10 sec 00:32:43.771 00:32:43.771 Asymmetric Namespace Access Capabilities 00:32:43.771 ANA Optimized State : Supported 00:32:43.771 ANA Non-Optimized State : Supported 00:32:43.771 ANA Inaccessible State : Supported 00:32:43.771 ANA Persistent Loss State : Supported 00:32:43.771 ANA Change State : Supported 00:32:43.771 ANAGRPID is not changed : No 00:32:43.771 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:43.771 00:32:43.771 ANA Group Identifier Maximum : 128 00:32:43.771 Number of ANA Group Identifiers : 128 00:32:43.771 Max Number of Allowed Namespaces : 1024 00:32:43.771 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:43.771 Command Effects Log Page: Supported 00:32:43.771 Get Log Page Extended Data: Supported 00:32:43.771 Telemetry Log Pages: Not Supported 00:32:43.771 Persistent Event Log Pages: Not Supported 00:32:43.771 Supported Log Pages Log Page: May Support 00:32:43.771 Commands Supported & Effects Log Page: Not Supported 00:32:43.771 Feature Identifiers & Effects Log Page:May Support 00:32:43.771 NVMe-MI Commands & Effects Log Page: May Support 00:32:43.771 Data Area 4 for Telemetry Log: Not Supported 00:32:43.771 Error Log Page Entries Supported: 128 00:32:43.771 Keep Alive: Supported 00:32:43.771 Keep Alive Granularity: 1000 ms 00:32:43.771 00:32:43.771 NVM Command Set Attributes 00:32:43.771 ========================== 00:32:43.771 Submission Queue Entry Size 00:32:43.771 Max: 64 00:32:43.771 Min: 64 00:32:43.771 Completion Queue Entry Size 00:32:43.771 Max: 16 00:32:43.771 Min: 16 00:32:43.771 Number of Namespaces: 1024 00:32:43.771 Compare Command: Not Supported 00:32:43.771 Write Uncorrectable Command: Not Supported 00:32:43.771 Dataset Management Command: Supported 
00:32:43.771 Write Zeroes Command: Supported 00:32:43.771 Set Features Save Field: Not Supported 00:32:43.771 Reservations: Not Supported 00:32:43.771 Timestamp: Not Supported 00:32:43.772 Copy: Not Supported 00:32:43.772 Volatile Write Cache: Present 00:32:43.772 Atomic Write Unit (Normal): 1 00:32:43.772 Atomic Write Unit (PFail): 1 00:32:43.772 Atomic Compare & Write Unit: 1 00:32:43.772 Fused Compare & Write: Not Supported 00:32:43.772 Scatter-Gather List 00:32:43.772 SGL Command Set: Supported 00:32:43.772 SGL Keyed: Not Supported 00:32:43.772 SGL Bit Bucket Descriptor: Not Supported 00:32:43.772 SGL Metadata Pointer: Not Supported 00:32:43.772 Oversized SGL: Not Supported 00:32:43.772 SGL Metadata Address: Not Supported 00:32:43.772 SGL Offset: Supported 00:32:43.772 Transport SGL Data Block: Not Supported 00:32:43.772 Replay Protected Memory Block: Not Supported 00:32:43.772 00:32:43.772 Firmware Slot Information 00:32:43.772 ========================= 00:32:43.772 Active slot: 0 00:32:43.772 00:32:43.772 Asymmetric Namespace Access 00:32:43.772 =========================== 00:32:43.772 Change Count : 0 00:32:43.772 Number of ANA Group Descriptors : 1 00:32:43.772 ANA Group Descriptor : 0 00:32:43.772 ANA Group ID : 1 00:32:43.772 Number of NSID Values : 1 00:32:43.772 Change Count : 0 00:32:43.772 ANA State : 1 00:32:43.772 Namespace Identifier : 1 00:32:43.772 00:32:43.772 Commands Supported and Effects 00:32:43.772 ============================== 00:32:43.772 Admin Commands 00:32:43.772 -------------- 00:32:43.772 Get Log Page (02h): Supported 00:32:43.772 Identify (06h): Supported 00:32:43.772 Abort (08h): Supported 00:32:43.772 Set Features (09h): Supported 00:32:43.772 Get Features (0Ah): Supported 00:32:43.772 Asynchronous Event Request (0Ch): Supported 00:32:43.772 Keep Alive (18h): Supported 00:32:43.772 I/O Commands 00:32:43.772 ------------ 00:32:43.772 Flush (00h): Supported 00:32:43.772 Write (01h): Supported LBA-Change 00:32:43.772 Read (02h): Supported 00:32:43.772 Write Zeroes (08h): Supported LBA-Change 00:32:43.772 Dataset Management (09h): Supported 00:32:43.772 00:32:43.772 Error Log 00:32:43.772 ========= 00:32:43.772 Entry: 0 00:32:43.772 Error Count: 0x3 00:32:43.772 Submission Queue Id: 0x0 00:32:43.772 Command Id: 0x5 00:32:43.772 Phase Bit: 0 00:32:43.772 Status Code: 0x2 00:32:43.772 Status Code Type: 0x0 00:32:43.772 Do Not Retry: 1 00:32:43.772 Error Location: 0x28 00:32:43.772 LBA: 0x0 00:32:43.772 Namespace: 0x0 00:32:43.772 Vendor Log Page: 0x0 00:32:43.772 ----------- 00:32:43.772 Entry: 1 00:32:43.772 Error Count: 0x2 00:32:43.772 Submission Queue Id: 0x0 00:32:43.772 Command Id: 0x5 00:32:43.772 Phase Bit: 0 00:32:43.772 Status Code: 0x2 00:32:43.772 Status Code Type: 0x0 00:32:43.772 Do Not Retry: 1 00:32:43.772 Error Location: 0x28 00:32:43.772 LBA: 0x0 00:32:43.772 Namespace: 0x0 00:32:43.772 Vendor Log Page: 0x0 00:32:43.772 ----------- 00:32:43.772 Entry: 2 00:32:43.772 Error Count: 0x1 00:32:43.772 Submission Queue Id: 0x0 00:32:43.772 Command Id: 0x4 00:32:43.772 Phase Bit: 0 00:32:43.772 Status Code: 0x2 00:32:43.772 Status Code Type: 0x0 00:32:43.772 Do Not Retry: 1 00:32:43.772 Error Location: 0x28 00:32:43.772 LBA: 0x0 00:32:43.772 Namespace: 0x0 00:32:43.772 Vendor Log Page: 0x0 00:32:43.772 00:32:43.772 Number of Queues 00:32:43.772 ================ 00:32:43.772 Number of I/O Submission Queues: 128 00:32:43.772 Number of I/O Completion Queues: 128 00:32:43.772 00:32:43.772 ZNS Specific Controller Data 00:32:43.772 
============================ 00:32:43.772 Zone Append Size Limit: 0 00:32:43.772 00:32:43.772 00:32:43.772 Active Namespaces 00:32:43.772 ================= 00:32:43.772 get_feature(0x05) failed 00:32:43.772 Namespace ID:1 00:32:43.772 Command Set Identifier: NVM (00h) 00:32:43.772 Deallocate: Supported 00:32:43.772 Deallocated/Unwritten Error: Not Supported 00:32:43.772 Deallocated Read Value: Unknown 00:32:43.772 Deallocate in Write Zeroes: Not Supported 00:32:43.772 Deallocated Guard Field: 0xFFFF 00:32:43.772 Flush: Supported 00:32:43.772 Reservation: Not Supported 00:32:43.772 Namespace Sharing Capabilities: Multiple Controllers 00:32:43.772 Size (in LBAs): 1953525168 (931GiB) 00:32:43.772 Capacity (in LBAs): 1953525168 (931GiB) 00:32:43.772 Utilization (in LBAs): 1953525168 (931GiB) 00:32:43.772 UUID: 69e30a56-3d60-4edb-ba12-6448e40950a9 00:32:43.772 Thin Provisioning: Not Supported 00:32:43.772 Per-NS Atomic Units: Yes 00:32:43.772 Atomic Boundary Size (Normal): 0 00:32:43.772 Atomic Boundary Size (PFail): 0 00:32:43.772 Atomic Boundary Offset: 0 00:32:43.772 NGUID/EUI64 Never Reused: No 00:32:43.772 ANA group ID: 1 00:32:43.772 Namespace Write Protected: No 00:32:43.772 Number of LBA Formats: 1 00:32:43.772 Current LBA Format: LBA Format #00 00:32:43.772 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:43.772 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:43.772 rmmod nvme_tcp 00:32:43.772 rmmod nvme_fabrics 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:43.772 23:02:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:46.305 
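The nvmftestfini trace above is the standard teardown for these TCP runs: unload the kernel initiator modules under a retry guard (the rmmod output shows nvme_tcp taking nvme_fabrics with it), remove the SPDK network namespace, and flush the leftover test address. A minimal sketch of that sequence, assuming this run's names (cvl_0_1, cvl_0_0_ns_spdk) and reducing the _remove_spdk_ns helper, whose body is not shown in this trace, to a plain ip netns delete:

set +e                                        # unload can fail while queues drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break          # the -v output above: rmmod nvme_tcp, rmmod nvme_fabrics
    sleep 1
done
modprobe -v -r nvme-fabrics                   # no-op if the loop already removed it
set -e
ip netns delete cvl_0_0_ns_spdk 2> /dev/null  # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                      # drop the initiator-side test address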
23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:46.305 23:02:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:46.871 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:46.871 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:46.871 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:47.130 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:47.130 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:47.130 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:47.130 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:47.130 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:47.130 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:47.130 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:47.130 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:47.130 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:47.130 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:47.130 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:47.130 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:47.130 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:48.067 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:48.067 00:32:48.067 real 0m9.267s 00:32:48.067 user 0m1.897s 00:32:48.067 sys 0m3.327s 00:32:48.067 23:02:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:48.067 23:02:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:48.067 ************************************ 00:32:48.067 END TEST nvmf_identify_kernel_target 00:32:48.067 ************************************ 00:32:48.067 23:02:40 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:48.067 23:02:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:48.067 23:02:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:48.067 23:02:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:48.067 ************************************ 00:32:48.067 START TEST nvmf_auth_host 00:32:48.067 ************************************ 00:32:48.067 23:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
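Before nvmf_auth_host starts, the clean_kernel_target trace above unpicks the configfs plumbing left behind by the identify test, in strict child-before-parent order: the port-to-subsystem symlink and the enabled namespace have to go before any rmdir can succeed. A sketch of that order, assuming the stock nvmet configfs layout (the bare echo 0 at common.sh@686 is taken to land in the namespace's enable attribute):

nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"  # take the namespace offline
rm -f "$nvmet/ports/1/subsystems/$nqn"                 # unlink port from subsystem
rmdir "$nvmet/subsystems/$nqn/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$nvmet/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                            # unload once no holders remain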
00:32:48.326 * Looking for test storage... 00:32:48.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:48.326 23:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.232 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.233 
23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:50.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:50.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:50.233 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:50.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:50.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:32:50.233 00:32:50.233 --- 10.0.0.2 ping statistics --- 00:32:50.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.233 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:50.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:32:50.233 00:32:50.233 --- 10.0.0.1 ping statistics --- 00:32:50.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.233 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3679838 00:32:50.233 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:50.234 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3679838 00:32:50.234 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3679838 ']' 00:32:50.234 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.234 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:50.234 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
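The nvmf_tcp_init sequence traced above is what gives this phy run its two endpoints: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to serve as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP/4420 on the test link and a ping in each direction as a sanity check. Condensed, using the names and addresses this run chose:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC leaves the root netns
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port (common.sh@264)
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With both pings answering, nvmfappstart launches nvmf_tgt inside the namespace (hence the ip netns exec prefix folded into NVMF_APP at common.sh@270) and waits for its RPC socket at /var/tmp/spdk.sock.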
00:32:50.234 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:50.234 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.491 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:50.491 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:50.491 23:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:50.491 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.491 23:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.749 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.749 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:50.749 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:50.749 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.749 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.749 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.749 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6b7dc5f9b0b8425392017718bb94481e 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yob 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6b7dc5f9b0b8425392017718bb94481e 0 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6b7dc5f9b0b8425392017718bb94481e 0 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6b7dc5f9b0b8425392017718bb94481e 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yob 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yob 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.yob 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:50.750 
23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4ea44f363b4f88e45e273cf4e714b00e46a7a6a73ce581eabac04062063d32c8 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.WBP 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4ea44f363b4f88e45e273cf4e714b00e46a7a6a73ce581eabac04062063d32c8 3 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4ea44f363b4f88e45e273cf4e714b00e46a7a6a73ce581eabac04062063d32c8 3 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4ea44f363b4f88e45e273cf4e714b00e46a7a6a73ce581eabac04062063d32c8 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.WBP 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.WBP 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.WBP 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9451a386e024b8c669eb8e6464f2fe164cbeab1e6e8d4804 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zRV 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9451a386e024b8c669eb8e6464f2fe164cbeab1e6e8d4804 0 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9451a386e024b8c669eb8e6464f2fe164cbeab1e6e8d4804 0 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9451a386e024b8c669eb8e6464f2fe164cbeab1e6e8d4804 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zRV 00:32:50.750 23:02:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zRV 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.zRV 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10a684c6ceb5efc76f920d8465809ea740b499e531739b4e 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XiI 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10a684c6ceb5efc76f920d8465809ea740b499e531739b4e 2 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10a684c6ceb5efc76f920d8465809ea740b499e531739b4e 2 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10a684c6ceb5efc76f920d8465809ea740b499e531739b4e 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XiI 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XiI 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XiI 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c3e1c16fd2dc2331e305b94051d35022 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bPS 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c3e1c16fd2dc2331e305b94051d35022 1 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c3e1c16fd2dc2331e305b94051d35022 1 
00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c3e1c16fd2dc2331e305b94051d35022 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:50.750 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bPS 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bPS 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.bPS 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e8d0c56c027ca211727eb065b75345a 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Pzz 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e8d0c56c027ca211727eb065b75345a 1 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e8d0c56c027ca211727eb065b75345a 1 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5e8d0c56c027ca211727eb065b75345a 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Pzz 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Pzz 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Pzz 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=d239baa2ad6ca1fb31596fbb99d64ff7965f953ca3a80c37 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.J0V 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d239baa2ad6ca1fb31596fbb99d64ff7965f953ca3a80c37 2 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d239baa2ad6ca1fb31596fbb99d64ff7965f953ca3a80c37 2 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d239baa2ad6ca1fb31596fbb99d64ff7965f953ca3a80c37 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.J0V 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.J0V 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.J0V 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:51.009 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0fcf80a1f09c0e28088d609463437444 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.S0K 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0fcf80a1f09c0e28088d609463437444 0 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0fcf80a1f09c0e28088d609463437444 0 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0fcf80a1f09c0e28088d609463437444 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.S0K 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.S0K 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.S0K 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dd34e5a4f68ca2d4cd0483f874c16569ef61c1dbde8af2b9a93870e00a73dd61 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.k1V 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dd34e5a4f68ca2d4cd0483f874c16569ef61c1dbde8af2b9a93870e00a73dd61 3 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dd34e5a4f68ca2d4cd0483f874c16569ef61c1dbde8af2b9a93870e00a73dd61 3 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dd34e5a4f68ca2d4cd0483f874c16569ef61c1dbde8af2b9a93870e00a73dd61 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.k1V 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.k1V 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.k1V 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3679838 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3679838 ']' 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
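All of the keys[] and ckeys[] secrets above (ckeys[4] is deliberately left empty) come out of the same gen_dhchap_key helper: draw len/2 random bytes as a hex string, wrap it in the DHHC-1 format NVMe DH-HMAC-CHAP expects (hash index, then base64 of the ASCII key plus a little-endian CRC32 tail), and park the result in a 0600 temp file. A sketch of that helper reconstructed from the trace, with the format_key step folded in; treat the Python body as an approximation of the inline script traced at nvmf/common.sh@705:

gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex characters of entropy
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")         # 4-byte integrity tail
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
PY
    chmod 0600 "$file"                              # secrets stay owner-only
    echo "$file"
}

For example, keys[1] above, generated with gen_dhchap_key null 48, holds DHHC-1:00:<base64 of the 48 hex characters plus CRC>:, which is exactly the string that resurfaces as the DHCHAP key later in this trace.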
00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:51.010 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yob 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.WBP ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WBP 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zRV 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XiI ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XiI 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.bPS 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Pzz ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Pzz 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.J0V 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.S0K ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.S0K 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.k1V 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.269 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:51.527 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:51.528 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:51.528 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:32:51.528 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:51.528 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:51.528 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:51.528 23:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:52.460 Waiting for block devices as requested 00:32:52.460 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:52.460 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:52.717 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:52.717 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:52.717 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:52.976 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:52.976 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:52.976 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:52.976 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:53.236 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:53.236 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:53.236 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:53.496 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:53.496 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:53.496 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:53.496 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:53.756 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:54.015 No valid GPT data, bailing 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:54.015 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:54.274 00:32:54.274 Discovery Log Number of Records 2, Generation counter 2 00:32:54.274 =====Discovery Log Entry 0====== 00:32:54.274 trtype: tcp 00:32:54.274 adrfam: ipv4 00:32:54.274 subtype: current discovery subsystem 00:32:54.274 treq: not specified, sq flow control disable supported 00:32:54.274 portid: 1 00:32:54.274 trsvcid: 4420 00:32:54.274 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:54.274 traddr: 10.0.0.1 00:32:54.274 eflags: none 00:32:54.274 sectype: none 00:32:54.274 =====Discovery Log Entry 1====== 00:32:54.274 trtype: tcp 00:32:54.274 adrfam: ipv4 00:32:54.274 subtype: nvme subsystem 00:32:54.274 treq: not specified, sq flow control disable supported 00:32:54.274 portid: 1 00:32:54.274 trsvcid: 4420 00:32:54.274 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:54.274 traddr: 10.0.0.1 00:32:54.274 eflags: none 00:32:54.274 sectype: none 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 
]] 00:32:54.274 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.275 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 nvme0n1 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.573 23:02:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 
23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 23:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 nvme0n1 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.833 23:02:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.833 nvme0n1 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
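Each nvmet_auth_set_key invocation traced above (host/auth.sh@42-51) programs the kernel target's expectation for one digest/dhgroup/keyid combination. Bash xtrace prints the echo commands but not their redirections, so the destination files never appear in the log; a minimal sketch of the likely full step, where the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the stock kernel nvmet host attributes and are an assumption to that extent:

# Point the kernel target's DH-HMAC-CHAP expectations for this host at the
# digest, DH group, and DHHC-1 secrets under test (secrets elided here).
# Attribute names assumed; the xtrace elides the redirect targets.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"        # digest for this iteration
echo ffdhe2048 > "$host/dhchap_dhgroup"          # DH group for this iteration
echo 'DHHC-1:00:...' > "$host/dhchap_key"        # host secret for the keyid
echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"   # controller secret, only when a ckey exists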
00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:54.833 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.834 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.094 nvme0n1 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:32:55.094 23:02:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.094 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.354 nvme0n1 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.355 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.614 nvme0n1 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.614 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.615 23:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.875 nvme0n1 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.875 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.135 nvme0n1 00:32:56.135 
23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.135 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.136 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.396 nvme0n1 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
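On the initiator side, connect_authenticate (host/auth.sh@55-61) first narrows bdev_nvme to the single digest/DH-group pair under test and then performs an authenticated attach using the keyring entries for the current keyid, expecting exactly one controller (nvme0) to come up before detaching it again. Standalone, the RPC sequence for the iteration in progress here (sha256/ffdhe3072, keyid 3) would look like the sketch below, with every flag taken from the xtrace:

# Restrict negotiation to the parameters under test, then attach; the attach
# fails outright if DH-HMAC-CHAP cannot complete with these keys.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
scripts/rpc.py bdev_nvme_get_controllers      # the test expects a single controller, nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0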
00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.396 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.397 23:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.397 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.397 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.397 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.656 nvme0n1 00:32:56.656 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.656 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.656 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.656 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.656 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.656 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.656 
23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.656 23:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.656 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.656 23:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.656 23:02:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.656 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.916 nvme0n1 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:56.916 23:02:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:56.916 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.917 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.181 nvme0n1 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.181 23:02:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.181 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.439 nvme0n1 00:32:57.439 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.439 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.439 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.439 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.439 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.439 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:32:57.698 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.699 23:02:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.699 23:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.957 nvme0n1 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
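
The trace above is one pass of the test's digest/DH-group sweep: for each keyid the target-side key is installed via nvmet_auth_set_key, the host is pinned to a single digest and DH group with bdev_nvme_set_options, a controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), checked with bdev_nvme_get_controllers, and detached again. A condensed sketch of that loop — not the literal host/auth.sh body — assuming the helper names from the trace (rpc_cmd, nvmet_auth_set_key, the keys/ckeys arrays) and the standard FFDHE group list (this stretch of the log shows ffdhe3072 through ffdhe8192):

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
        # Target side: install the key (and controller key, if any) for this host.
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
        # Host side: allow only this digest/DH-group combination for DH-HMAC-CHAP.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # Attach, authenticating with key $keyid; bidirectional when ckey$keyid exists.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # Verify the controller authenticated and came up, then detach for the next pass.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
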
00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:32:57.957 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.958 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.216 nvme0n1 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.216 23:02:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.216 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.475 23:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.733 nvme0n1 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:32:58.733 23:02:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:32:58.733 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.734 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.302 nvme0n1 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.302 
23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.302 23:02:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.302 23:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.874 nvme0n1 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.874 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.440 nvme0n1 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.440 
23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:00.440 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.441 23:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.005 nvme0n1 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.005 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.264 23:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.832 nvme0n1 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.832 23:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.768 nvme0n1 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.768 23:02:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.768 23:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.703 nvme0n1 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.703 23:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.639 nvme0n1 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.639 
23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.639 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
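The nvmf/common.sh@741-@755 entries traced above are get_main_ns_ip, the helper that picks which address the initiator dials for the current transport. A best-effort reconstruction in bash follows; the variable names and -z checks are taken from the xtrace, while the function wrapper and the indirect expansion are inferred from ip=NVMF_INITIATOR_IP resolving to 10.0.0.1 in this run:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates

        # Map each transport to the *name* of the env var holding its IP.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # @747: "tcp" here
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747

        ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: picks NVMF_INITIATOR_IP
        ip=${!ip}                              # inferred indirection -> 10.0.0.1
        [[ -z $ip ]] && return 1               # @750
        echo "$ip"                             # @755
    }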
00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.640 23:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.017 nvme0n1 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.017 
23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.017 23:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 nvme0n1 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 nvme0n1 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
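The echo entries at host/auth.sh@48-@51 in each round are nvmet_auth_set_key provisioning the kernel nvmet target with the round's digest, DH group, and key material. set -x does not print redirection targets, so the destinations in the sketch below are an assumption based on the standard Linux nvmet configfs host attributes, not something this log confirms; only the echoed values come from the trace:

    # Hypothetical configfs layout; $host_dir and the attribute names are
    # assumptions, matching the sha384/ffdhe2048 round just traced.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"     # @48: digest
    echo ffdhe2048      > "$host_dir/dhchap_dhgroup"  # @49: DH group
    echo "$key"         > "$host_dir/dhchap_key"      # @50: host DHHC-1 key
    # @51: the controller key is only written for bidirectional rounds
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"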
00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.956 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.957 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.957 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.957 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.957 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.957 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.957 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.957 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:06.957 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.957 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.215 nvme0n1 00:33:07.215 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.215 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.215 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.215 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.215 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.215 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.215 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.215 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.216 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.474 nvme0n1 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.474 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.475 23:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.734 nvme0n1 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.734 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.993 nvme0n1 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
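host/auth.sh@100-@104 above shows the outer sweep advancing: the dhgroup loop has moved on to ffdhe3072 while the digest loop is still on sha384. The driver is three nested loops; here is a sketch listing only the combinations this excerpt actually visits (the real arrays in auth.sh may carry more entries, e.g. further digests and DH groups):

    digests=(sha256 sha384)                            # sha384 rounds begin above
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)

    for digest in "${digests[@]}"; do                  # @100
        for dhgroup in "${dhgroups[@]}"; do            # @101
            for keyid in "${!keys[@]}"; do             # @102: keyid 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done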
00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.993 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.252 nvme0n1 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
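Each keyid iteration is the same round trip, seen once more here for sha384/ffdhe3072 key0: restrict the host to one digest and DH group, attach with the round's key pair, confirm the controller authenticated and surfaced as nvme0, then detach. Condensed from the rpc_cmd calls in the trace (rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; every flag below appears verbatim above):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe3072                                   # @60
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0                    # @61

    # @64: the round passes only if the authenticated controller shows up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0                         # @65

The bare nvme0n1 lines between rounds appear to be the attached controller's namespace turning up as a block device once authentication succeeds.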
00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.252 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.511 nvme0n1 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.511 23:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.770 nvme0n1 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.770 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.030 nvme0n1 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.030 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.328 nvme0n1 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.328 23:03:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.328 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.329 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.588 nvme0n1 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.588 23:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.847 nvme0n1 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.848 23:03:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.848 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.108 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.108 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.368 nvme0n1 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:10.368 23:03:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.368 23:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.626 nvme0n1 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.626 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:10.627 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.886 nvme0n1 00:33:10.886 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.886 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.886 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.886 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.886 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.886 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.146 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.716 nvme0n1 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.716 23:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.284 nvme0n1 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.284 23:03:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.284 23:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.852 nvme0n1 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.852 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.420 nvme0n1 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
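The get_main_ns_ip xtrace that recurs throughout this run (nvmf/common.sh@741-755) boils down to a transport-to-variable lookup followed by an indirect expansion. Below is a minimal sketch reconstructed from those trace lines only, not the verbatim SPDK source; the TEST_TRANSPORT variable name is an assumption, since the log shows only its value, tcp.

    # Sketch reconstructed from the nvmf/common.sh@741-755 xtrace above; not
    # the verbatim SPDK source. TEST_TRANSPORT is an assumed variable name:
    # the trace only ever shows its value ("tcp").
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP              # trace @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP                  # trace @745
        [[ -z $TEST_TRANSPORT ]] && return 1                    # @747: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # @748: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1   # @750: indirect expansion yields 10.0.0.1
        echo "${!ip}"                 # @755: echo 10.0.0.1
    }

The array holds the name of the environment variable appropriate for each transport, not the address itself; the ${!ip} indirection is what turns NVMF_INITIATOR_IP into the 10.0.0.1 echoed in every attach step above.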
00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.420 23:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.989 nvme0n1 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.989 23:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.924 nvme0n1 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:14.924 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.925 23:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.858 nvme0n1 00:33:15.858 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.858 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.858 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.858 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.858 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.858 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.117 23:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.054 nvme0n1 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.054 23:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.995 nvme0n1 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.995 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.996 23:03:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.996 23:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.934 nvme0n1 00:33:18.934 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.934 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.934 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.934 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.934 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.934 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.193 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.194 nvme0n1 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.194 23:03:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.194 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.453 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.454 nvme0n1 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.454 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.712 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.713 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.713 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.713 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.713 23:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.713 23:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:19.713 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.713 23:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.713 nvme0n1 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.713 23:03:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.713 23:03:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.713 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.971 nvme0n1 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.971 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.230 nvme0n1 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.230 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.488 nvme0n1 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.488 
23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.488 23:03:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.488 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.489 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.489 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.489 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.489 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.489 23:03:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.489 23:03:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:20.489 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.489 23:03:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.748 nvme0n1 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.748 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.749 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.008 nvme0n1 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.008 23:03:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
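
On the initiator side, connect_authenticate (auth.sh@55-65) is a fixed four-step cycle, and every RPC in it is visible verbatim in the trace. A condensed restatement, generic over the key slot (rpc_cmd wraps SPDK's scripts/rpc.py; the key names keyN/ckeyN are assumed to have been registered earlier in the script, outside this excerpt):

```bash
# Condensed host-side cycle mirroring the RPCs in the log.
digest=sha512 dhgroup=ffdhe3072 keyid=3

# 1. restrict the bdev layer to the digest/DH group under test
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. attach with the matching key slot (ckey only when bidirectional)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. assert the authenticated controller actually came up
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. tear it down so the next digest/dhgroup/keyid case starts clean
rpc_cmd bdev_nvme_detach_controller nvme0
```
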
00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.008 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.266 nvme0n1 00:33:21.266 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.266 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.266 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.266 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.266 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.266 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.266 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.266 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.266 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.267 
23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.267 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.526 nvme0n1 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.526 23:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.785 nvme0n1 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.785 23:03:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.785 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.045 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.305 nvme0n1 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
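
The nvmf/common.sh@741-755 block repeated before every attach is get_main_ns_ip resolving which address the initiator should dial. A paraphrased sketch of the logic the trace implies; the full function body and the transport variable's name are not in this excerpt, so TEST_TRANSPORT is an assumption:

```bash
# Paraphrase of get_main_ns_ip as traced here: map transport -> variable
# name, then indirect-expand it. For tcp this resolves to 10.0.0.1.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1          # assumed variable name
    ip=${ip_candidates[$TEST_TRANSPORT]}          # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                   # indirect expansion
    echo "${!ip}"
}
```
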
00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.305 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.564 nvme0n1 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.564 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:22.565 23:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.565 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.822 nvme0n1 00:33:22.822 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.822 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.822 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.822 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.822 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.822 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.082 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 nvme0n1 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.343 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.344 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:33:23.344 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.344 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.344 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.344 23:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.344 23:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:23.344 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.344 23:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.985 nvme0n1 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
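
Stepping back, the auth.sh@101-104 markers show the driver loop that generates all of these near-identical iterations: for each DH group, every key slot is re-provisioned on the target and a fresh authenticated connect/verify/disconnect cycle is run. A sketch of that sweep under the values visible in this excerpt (keyids 0-4, groups ffdhe3072 through ffdhe8192; ffdhe2048 is assumed from the standard group list, as this excerpt starts mid-sweep):

```bash
# Sketch of the sha512 sweep this log is executing. nvmet_auth_set_key and
# connect_authenticate are the functions traced above.
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3 4; do
        nvmet_auth_set_key  sha512 "$dhgroup" "$keyid"  # target side
        connect_authenticate sha512 "$dhgroup" "$keyid" # initiator side
    done
done
```

Note how this structure explains the timestamps: each "nvme0n1" line in the log marks one successful attach inside the inner loop, and the runtime per iteration grows as the DH group size increases.
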
00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.985 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.551 nvme0n1 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.551 23:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.121 nvme0n1 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.121 23:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.691 nvme0n1 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.691 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.950 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.951 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.518 nvme0n1 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.518 23:03:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI3ZGM1ZjliMGI4NDI1MzkyMDE3NzE4YmI5NDQ4MWVgVxWS: 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: ]] 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNDRmMzYzYjRmODhlNDVlMjczY2Y0ZTcxNGIwMGU0NmE3YTZhNzNjZTU4MWVhYmFjMDQwNjIwNjNkMzJjOIytsw8=: 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.518 23:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.455 nvme0n1 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:27.455 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.456 23:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.395 nvme0n1 00:33:28.395 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.395 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.395 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.395 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.395 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.395 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.395 23:03:20 
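
On the initiator side, connect_authenticate first restricts SPDK's bdev_nvme layer to a single digest/dhgroup pair and then attaches with the matching key; rpc_cmd is the test suite's thin wrapper around scripts/rpc.py. A condensed sketch of the iteration just traced (key1/ckey1 are the key names registered earlier in the test):

  # host-side flow for one digest/dhgroup/key combination
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1   # ctrlr key => bidirectional auth
  # verify the controller came up, then detach for the next iteration
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
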
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.395 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.395 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.395 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNlMWMxNmZkMmRjMjMzMWUzMDViOTQwNTFkMzUwMjIr1JuS: 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: ]] 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU4ZDBjNTZjMDI3Y2EyMTE3MjdlYjA2NWI3NTM0NWFoqBO2: 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.655 23:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.656 23:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.656 23:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:28.656 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.656 23:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.595 nvme0n1 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDIzOWJhYTJhZDZjYTFmYjMxNTk2ZmJiOTlkNjRmZjc5NjVmOTUzY2EzYTgwYzM3ue9IKA==: 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: ]] 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZjZjgwYTFmMDljMGUyODA4OGQ2MDk0NjM0Mzc0NDRh51Gi: 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:29.595 23:03:21 
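
The get_main_ns_ip block that repeats before every attach resolves which address the initiator should dial for the current transport. Reconstructed from the trace (the variable indirection via ${!ip} is inferred from the final "echo 10.0.0.1", which is the value of NVMF_INITIATOR_IP):

  # reconstructed sketch of get_main_ns_ip from nvmf/common.sh
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable holding the IP
      [[ -z ${!ip} ]] && return 1            # here: NVMF_INITIATOR_IP=10.0.0.1
      echo "${!ip}"
  }
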
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.595 23:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.536 nvme0n1 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGQzNGU1YTRmNjhjYTJkNGNkMDQ4M2Y4NzRjMTY1NjllZjYxYzFkYmRlOGFmMmI5YTkzODcwZTAwYTczZGQ2MWB9tso=: 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:30.536 23:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.476 nvme0n1 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ1MWEzODZlMDI0YjhjNjY5ZWI4ZTY0NjRmMmZlMTY0Y2JlYWIxZTZlOGQ0ODA0xMaSjQ==: 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: ]] 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTBhNjg0YzZjZWI1ZWZjNzZmOTIwZDg0NjU4MDllYTc0MGI0OTllNTMxNzM5YjRlwI93Iw==: 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.476 
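
Note that the keyid=4 attaches above carry --dhchap-key key4 but no --dhchap-ctrlr-key: key 4 has no controller key, and the ckey array is built with bash's ${var:+word} expansion (host/auth.sh@58 in the trace), which yields nothing when the entry is empty. For illustration:

  # the flag drops out entirely when no controller key is defined
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # keyid=4, ckeys[4] empty     -> ckey=()                        (host-only auth)
  # keyid=1, ckeys[1] non-empty -> ckey=(--dhchap-ctrlr-key ckey1) (bidirectional)
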
23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.476 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.477 request: 00:33:31.477 { 00:33:31.477 "name": "nvme0", 00:33:31.477 "trtype": "tcp", 00:33:31.477 "traddr": "10.0.0.1", 00:33:31.477 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:31.477 "adrfam": "ipv4", 00:33:31.477 "trsvcid": "4420", 00:33:31.477 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:31.477 "method": "bdev_nvme_attach_controller", 00:33:31.477 "req_id": 1 00:33:31.477 } 00:33:31.477 Got JSON-RPC error response 00:33:31.477 response: 00:33:31.477 { 00:33:31.477 "code": -5, 00:33:31.477 "message": "Input/output error" 00:33:31.477 } 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:31.477 
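
The request/response pair above is the first negative test: controller authentication is still required by the target, but no --dhchap-key is supplied, so the attach must fail (JSON-RPC error -5, Input/output error) and leave no controllers behind. The NOT helper used in the trace simply inverts a command's exit status; the equivalent check:

  # an attach without a DH-HMAC-CHAP key must be rejected by the target
  if rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unauthenticated attach unexpectedly succeeded" >&2
      exit 1
  fi
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))   # nothing left behind
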
23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.477 23:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.737 request: 00:33:31.737 { 00:33:31.737 "name": "nvme0", 00:33:31.737 "trtype": "tcp", 00:33:31.737 "traddr": "10.0.0.1", 00:33:31.737 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:31.737 "adrfam": "ipv4", 00:33:31.737 "trsvcid": "4420", 00:33:31.737 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:31.737 "dhchap_key": "key2", 00:33:31.737 "method": "bdev_nvme_attach_controller", 00:33:31.737 "req_id": 1 00:33:31.737 } 00:33:31.737 Got JSON-RPC error response 00:33:31.737 response: 00:33:31.737 { 00:33:31.737 "code": -5, 00:33:31.737 "message": "Input/output error" 00:33:31.737 } 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:31.737 
23:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.737 request: 00:33:31.737 { 00:33:31.737 "name": "nvme0", 00:33:31.737 "trtype": "tcp", 00:33:31.737 "traddr": "10.0.0.1", 00:33:31.737 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:31.737 "adrfam": "ipv4", 00:33:31.737 "trsvcid": "4420", 00:33:31.737 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:31.737 "dhchap_key": "key1", 00:33:31.737 "dhchap_ctrlr_key": "ckey2", 00:33:31.737 "method": "bdev_nvme_attach_controller", 00:33:31.737 "req_id": 1 
00:33:31.737 } 00:33:31.737 Got JSON-RPC error response 00:33:31.737 response: 00:33:31.737 { 00:33:31.737 "code": -5, 00:33:31.737 "message": "Input/output error" 00:33:31.737 } 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:31.737 rmmod nvme_tcp 00:33:31.737 rmmod nvme_fabrics 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3679838 ']' 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3679838 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3679838 ']' 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3679838 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:31.737 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3679838 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3679838' 00:33:31.997 killing process with pid 3679838 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3679838 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3679838 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:31.997 23:03:24 
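
After the third negative test (mismatched key1/ckey2) the suite tears everything down: the kernel modules are unloaded and the SPDK nvmf target process is stopped via killprocess. A sketch of killprocess reconstructed from the trace (the sudo special case visible at autotest_common.sh@956 is omitted here):

  # reconstructed sketch of killprocess from autotest_common.sh
  killprocess() {
      local pid=$1 process_name
      [[ -z $pid ]] && return 1
      kill -0 "$pid" || return 1                           # still alive?
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK app
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }
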
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.997 23:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:34.530 23:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:35.464 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:35.464 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:35.464 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:35.464 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:35.464 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:35.464 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:35.464 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:35.464 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:35.464 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:35.464 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:35.464 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:35.464 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:35.464 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:35.464 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:35.464 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:35.464 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:36.400 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:36.400 23:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.yob /tmp/spdk.key-null.zRV /tmp/spdk.key-sha256.bPS /tmp/spdk.key-sha384.J0V /tmp/spdk.key-sha512.k1V /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:36.400 23:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:37.335 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:37.335 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:37.335 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:37.335 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:37.335 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:37.335 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:37.335 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:37.335 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:37.335 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:37.335 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:37.335 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:37.335 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:37.335 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:37.335 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:37.335 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:37.335 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:37.335 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:37.594 00:33:37.594 real 0m49.440s 00:33:37.594 user 0m47.256s 00:33:37.594 sys 0m5.567s 00:33:37.594 23:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:37.594 23:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.594 ************************************ 00:33:37.594 END TEST nvmf_auth_host 00:33:37.594 ************************************ 00:33:37.594 23:03:30 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:37.594 23:03:30 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:37.594 23:03:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:37.594 23:03:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:37.594 23:03:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:37.594 ************************************ 00:33:37.594 START TEST nvmf_digest 00:33:37.594 ************************************ 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:37.594 * Looking for test storage... 
00:33:37.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.594 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:37.853 23:03:30 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:37.853 23:03:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:39.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:39.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:39.759 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:39.759 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:39.759 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:39.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:33:39.760 00:33:39.760 --- 10.0.0.2 ping statistics --- 00:33:39.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.760 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:33:39.760 00:33:39.760 --- 10.0.0.1 ping statistics --- 00:33:39.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.760 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:39.760 ************************************ 00:33:39.760 START TEST nvmf_digest_clean 00:33:39.760 ************************************ 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3689991 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3689991 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3689991 ']' 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.760 
23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:39.760 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.018 [2024-07-26 23:03:32.282173] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:40.018 [2024-07-26 23:03:32.282256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.018 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.018 [2024-07-26 23:03:32.359710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.018 [2024-07-26 23:03:32.453513] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.018 [2024-07-26 23:03:32.453591] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.018 [2024-07-26 23:03:32.453622] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.018 [2024-07-26 23:03:32.453644] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.018 [2024-07-26 23:03:32.453664] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
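The nvmf_tcp_init sequence earlier in this trace builds the two-namespace topology that every test in this file reuses: the target-side NIC (cvl_0_0) is moved into a private network namespace, the initiator-side NIC (cvl_0_1) stays in the root namespace, and connectivity on the NVMe/TCP port is verified in both directions. Condensed into one runnable sketch, with all commands taken from the trace above (run as root, on a host with these interface names):

    # Flush any stale addresses, then move the target NIC into its own netns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side stays in the root ns; target side lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

This is also why the nvmf_tgt launch just above is prefixed with "ip netns exec cvl_0_0_ns_spdk": the target process must run where cvl_0_0 now lives.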
00:33:40.018 [2024-07-26 23:03:32.453700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.275 null0 00:33:40.275 [2024-07-26 23:03:32.698004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.275 [2024-07-26 23:03:32.722230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3690017 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3690017 /var/tmp/bperf.sock 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3690017 ']' 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:40.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:40.275 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.275 [2024-07-26 23:03:32.769538] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:40.275 [2024-07-26 23:03:32.769613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690017 ] 00:33:40.533 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.533 [2024-07-26 23:03:32.836418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.533 [2024-07-26 23:03:32.929573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.533 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:40.533 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:40.533 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:40.533 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:40.533 23:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:41.108 23:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.108 23:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.371 nvme0n1 00:33:41.371 23:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:41.371 23:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:41.371 Running I/O for 2 seconds... 
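bdevperf's summary lines report IOPS and MiB/s for the same run, so the result table that follows can be cross-checked with one line of arithmetic (numbers taken from the randread 4096/128 result below; a sanity check, not part of the harness):

    # IOPS * IO size (4096 B) should reproduce the MiB/s column
    awk 'BEGIN { printf "%.2f\n", 11602.99 * 4096 / (1024 * 1024) }'   # -> 45.32 MiB/s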
00:33:43.899 00:33:43.899 Latency(us) 00:33:43.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.899 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:43.899 nvme0n1 : 2.01 11602.99 45.32 0.00 0.00 11012.06 4757.43 29515.47 00:33:43.899 =================================================================================================================== 00:33:43.899 Total : 11602.99 45.32 0.00 0.00 11012.06 4757.43 29515.47 00:33:43.899 0 00:33:43.899 23:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:43.899 23:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:43.899 23:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:43.899 23:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:43.899 | select(.opcode=="crc32c") 00:33:43.899 | "\(.module_name) \(.executed)"' 00:33:43.899 23:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3690017 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3690017 ']' 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3690017 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3690017 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3690017' 00:33:43.899 killing process with pid 3690017 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3690017 00:33:43.899 Received shutdown signal, test time was about 2.000000 seconds 00:33:43.899 00:33:43.899 Latency(us) 00:33:43.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.899 =================================================================================================================== 00:33:43.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3690017 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:43.899 23:03:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3690425 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3690425 /var/tmp/bperf.sock 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3690425 ']' 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:43.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:43.899 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:44.157 [2024-07-26 23:03:36.421612] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:44.157 [2024-07-26 23:03:36.421702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690425 ] 00:33:44.157 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:44.157 Zero copy mechanism will not be used. 
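Each run_bperf invocation in this trace drives the same client-side sequence: bdevperf is started with -z and --wait-for-rpc so it idles on its own RPC socket, the framework is then initialized over that socket, a TCP controller is attached with data digest enabled (--ddgst), and the workload is kicked off with perform_tests. A condensed sketch using the exact binaries and flags shown above:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

    # 128 KiB random reads at queue depth 16 for 2 seconds; -z keeps bdevperf
    # waiting for a perform_tests RPC, --wait-for-rpc defers framework init
    $bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $bperf_py -s /var/tmp/bperf.sock perform_tests

The "zero copy threshold" notice printed for this run is expected: as the message itself states, the 131072-byte I/O size is above the 65536-byte threshold, so the zero copy mechanism is not used.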
00:33:44.157 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.157 [2024-07-26 23:03:36.484534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.157 [2024-07-26 23:03:36.578286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.157 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:44.157 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:44.157 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:44.157 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:44.157 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:44.724 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:44.724 23:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:44.982 nvme0n1 00:33:44.982 23:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:44.982 23:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:45.240 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.240 Zero copy mechanism will not be used. 00:33:45.240 Running I/O for 2 seconds... 
00:33:47.140 00:33:47.140 Latency(us) 00:33:47.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.140 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:47.140 nvme0n1 : 2.00 2682.26 335.28 0.00 0.00 5961.02 5679.79 12087.75 00:33:47.140 =================================================================================================================== 00:33:47.140 Total : 2682.26 335.28 0.00 0.00 5961.02 5679.79 12087.75 00:33:47.140 0 00:33:47.140 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:47.140 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:47.140 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:47.140 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:47.140 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:47.140 | select(.opcode=="crc32c") 00:33:47.140 | "\(.module_name) \(.executed)"' 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3690425 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3690425 ']' 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3690425 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3690425 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3690425' 00:33:47.398 killing process with pid 3690425 00:33:47.398 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3690425 00:33:47.399 Received shutdown signal, test time was about 2.000000 seconds 00:33:47.399 00:33:47.399 Latency(us) 00:33:47.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.399 =================================================================================================================== 00:33:47.399 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.399 23:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3690425 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:47.657 23:03:40 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3690884 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3690884 /var/tmp/bperf.sock 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3690884 ']' 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:47.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:47.657 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:47.657 [2024-07-26 23:03:40.136583] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:47.657 [2024-07-26 23:03:40.136670] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690884 ] 00:33:47.916 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.916 [2024-07-26 23:03:40.203934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.916 [2024-07-26 23:03:40.298199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.916 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:47.916 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:47.916 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:47.916 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:47.916 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:48.482 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.482 23:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.740 nvme0n1 00:33:48.740 23:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:48.740 23:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:48.999 Running I/O for 2 seconds... 
00:33:50.898 00:33:50.898 Latency(us) 00:33:50.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.898 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:50.898 nvme0n1 : 2.01 18824.36 73.53 0.00 0.00 6784.62 6359.42 15340.28 00:33:50.898 =================================================================================================================== 00:33:50.898 Total : 18824.36 73.53 0.00 0.00 6784.62 6359.42 15340.28 00:33:50.898 0 00:33:50.898 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:50.898 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:50.898 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:50.898 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:50.898 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:50.898 | select(.opcode=="crc32c") 00:33:50.898 | "\(.module_name) \(.executed)"' 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3690884 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3690884 ']' 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3690884 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3690884 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3690884' 00:33:51.157 killing process with pid 3690884 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3690884 00:33:51.157 Received shutdown signal, test time was about 2.000000 seconds 00:33:51.157 00:33:51.157 Latency(us) 00:33:51.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.157 =================================================================================================================== 00:33:51.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:51.157 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3690884 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:51.415 23:03:43 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3691360 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3691360 /var/tmp/bperf.sock 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3691360 ']' 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:51.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:51.415 23:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:51.415 [2024-07-26 23:03:43.870484] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:51.415 [2024-07-26 23:03:43.870573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3691360 ] 00:33:51.415 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:51.415 Zero copy mechanism will not be used. 
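After every workload the harness verifies that CRC-32C digesting actually executed, and in the expected accel module. The get_accel_stats step seen throughout this trace reduces to one RPC plus a jq filter; a sketch of that check, with the filter copied verbatim from the trace and the assertions mirroring digest.sh@94-96 (the process-substitution form is a restructuring, not the harness's exact code):

    # Print "<module_name> <executed>" for the crc32c opcode only
    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    # dsa_initiator is false in these runs, so the software module must have
    # executed a non-zero number of crc32c operations
    (( acc_executed > 0 )) && [[ $acc_module == software ]]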
00:33:51.415 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.672 [2024-07-26 23:03:43.935614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.673 [2024-07-26 23:03:44.028380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.673 23:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:51.673 23:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:51.673 23:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:51.673 23:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:51.673 23:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:51.931 23:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:51.931 23:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:52.495 nvme0n1 00:33:52.495 23:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:52.495 23:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:52.495 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:52.495 Zero copy mechanism will not be used. 00:33:52.495 Running I/O for 2 seconds... 
00:33:54.390 00:33:54.390 Latency(us) 00:33:54.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.390 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:54.390 nvme0n1 : 2.01 1842.22 230.28 0.00 0.00 8661.85 2864.17 11311.03 00:33:54.390 =================================================================================================================== 00:33:54.391 Total : 1842.22 230.28 0.00 0.00 8661.85 2864.17 11311.03 00:33:54.391 0 00:33:54.647 23:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:54.647 23:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:54.647 23:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:54.647 23:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:54.647 23:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:54.647 | select(.opcode=="crc32c") 00:33:54.647 | "\(.module_name) \(.executed)"' 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3691360 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3691360 ']' 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3691360 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3691360 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3691360' 00:33:54.903 killing process with pid 3691360 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3691360 00:33:54.903 Received shutdown signal, test time was about 2.000000 seconds 00:33:54.903 00:33:54.903 Latency(us) 00:33:54.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.903 =================================================================================================================== 00:33:54.903 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:54.903 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3691360 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3689991 00:33:55.160 23:03:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3689991 ']' 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3689991 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3689991 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3689991' 00:33:55.160 killing process with pid 3689991 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3689991 00:33:55.160 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3689991 00:33:55.418 00:33:55.418 real 0m15.462s 00:33:55.418 user 0m29.765s 00:33:55.418 sys 0m4.311s 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:55.418 ************************************ 00:33:55.418 END TEST nvmf_digest_clean 00:33:55.418 ************************************ 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:55.418 ************************************ 00:33:55.418 START TEST nvmf_digest_error 00:33:55.418 ************************************ 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3691789 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3691789 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3691789 ']' 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:55.418 23:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.418 [2024-07-26 23:03:47.796092] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:55.418 [2024-07-26 23:03:47.796164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.418 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.418 [2024-07-26 23:03:47.862606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.676 [2024-07-26 23:03:47.956583] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.676 [2024-07-26 23:03:47.956642] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.676 [2024-07-26 23:03:47.956658] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.676 [2024-07-26 23:03:47.956672] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.676 [2024-07-26 23:03:47.956684] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
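nvmf_digest_error, which starts here, differs from the clean test in one setup step visible just below: while the freshly started nvmf_tgt is still idling in --wait-for-rpc, the crc32c opcode is reassigned to the accel "error" module, and each run then toggles injection around controller attach. A sketch of the RPC sequence (accel_assign_opc and accel_error_inject_error appear verbatim in this trace; issuing framework_start_init to resume target init is an assumption about what the rpc_cmd wrapper does next, not something this excerpt shows):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Bind crc32c to the error-injecting accel module before init completes
    $rpc accel_assign_opc -o crc32c -m error
    $rpc framework_start_init        # assumption: init resumes via this RPC

    # Keep injection disabled while the bperf controller attaches...
    $rpc accel_error_inject_error -o crc32c -t disable
    # ...then corrupt the next 256 crc32c operations, so reads complete with
    # the "COMMAND TRANSIENT TRANSPORT ERROR" data digest failures seen below
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256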
00:33:55.676 [2024-07-26 23:03:47.956720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.676 [2024-07-26 23:03:48.057403] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.676 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.676 null0 00:33:55.676 [2024-07-26 23:03:48.168739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.935 [2024-07-26 23:03:48.192972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3691931 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3691931 /var/tmp/bperf.sock 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3691931 ']' 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:55.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:55.935 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.935 [2024-07-26 23:03:48.241852] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:55.935 [2024-07-26 23:03:48.241927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3691931 ] 00:33:55.935 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.935 [2024-07-26 23:03:48.307486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.935 [2024-07-26 23:03:48.399638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.193 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:56.193 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:56.193 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:56.193 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:56.450 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:56.450 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.450 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:56.450 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.450 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:56.450 23:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:56.709 nvme0n1 00:33:56.709 23:03:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:56.709 23:03:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.709 23:03:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:56.709 23:03:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.709 23:03:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:56.709 23:03:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:56.709 Running I/O for 2 seconds... 00:33:56.709 [2024-07-26 23:03:49.190272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.709 [2024-07-26 23:03:49.190326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.709 [2024-07-26 23:03:49.190349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.709 [2024-07-26 23:03:49.207287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.709 [2024-07-26 23:03:49.207324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.709 [2024-07-26 23:03:49.207345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.967 [2024-07-26 23:03:49.221306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.967 [2024-07-26 23:03:49.221354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.967 [2024-07-26 23:03:49.221375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.967 [2024-07-26 23:03:49.234823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.967 [2024-07-26 23:03:49.234860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.967 [2024-07-26 23:03:49.234880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.967 [2024-07-26 23:03:49.249200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.967 [2024-07-26 23:03:49.249253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.967 [2024-07-26 23:03:49.249273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.967 [2024-07-26 23:03:49.263463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.967 [2024-07-26 23:03:49.263517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.967 [2024-07-26 23:03:49.263538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.967 [2024-07-26 23:03:49.276100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.967 [2024-07-26 23:03:49.276156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1583 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.967 [2024-07-26 23:03:49.276178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.967 [2024-07-26 23:03:49.292110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.967 [2024-07-26 23:03:49.292152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.967 [2024-07-26 23:03:49.292173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.967 [2024-07-26 23:03:49.304188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.967 [2024-07-26 23:03:49.304228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.967 [2024-07-26 23:03:49.304252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.967 [2024-07-26 23:03:49.318508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.967 [2024-07-26 23:03:49.318542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.318561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.334307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.334353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.334375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.347011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.347047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.347077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.363816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.363852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.363872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.376321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.376358] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.376378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.393183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.393218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.393238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.411185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.411221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.411242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.423505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.423540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.423560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.439379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.439415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.439435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.456501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.456538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.456557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.968 [2024-07-26 23:03:49.468903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:56.968 [2024-07-26 23:03:49.468938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.968 [2024-07-26 23:03:49.468957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.483975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 
23:03:49.484011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.484031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.496295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.496330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.496348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.511665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.511700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.511720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.523663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.523698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.523725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.537874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.537909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.537928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.550582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.550617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.550635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.565495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.565529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.565548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.577768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.577804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.577823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.593006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.593041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.593068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.605256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.605291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.605310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.622692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.622737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.622758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.633986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.634022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.634041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.649349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.649384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.649403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.226 [2024-07-26 23:03:49.666838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.226 [2024-07-26 23:03:49.666874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.226 [2024-07-26 23:03:49.666893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.227 [2024-07-26 23:03:49.683283] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.227 [2024-07-26 23:03:49.683318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.227 [2024-07-26 23:03:49.683337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.227 [2024-07-26 23:03:49.696535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.227 [2024-07-26 23:03:49.696569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.227 [2024-07-26 23:03:49.696588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.227 [2024-07-26 23:03:49.712545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.227 [2024-07-26 23:03:49.712580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.227 [2024-07-26 23:03:49.712599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.227 [2024-07-26 23:03:49.725267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.227 [2024-07-26 23:03:49.725309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.227 [2024-07-26 23:03:49.725332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.739094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.739128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.739147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.755560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.755595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.755615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.767266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.767301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.767326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:57.485 [2024-07-26 23:03:49.782901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.782935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.782955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.799017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.799070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.799095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.811235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.811270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.811290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.828464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.828499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.828518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.840727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.840761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.840780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.854367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.854402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.854421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.868273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.868310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.868344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.884260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.884296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.884315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.895569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.895613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.895634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.910639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.910674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.910693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.924896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.924930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.924949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.937258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.937293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.937312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.951822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.951857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.485 [2024-07-26 23:03:49.951883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.485 [2024-07-26 23:03:49.966978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.485 [2024-07-26 23:03:49.967029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.486 [2024-07-26 23:03:49.967052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.486 [2024-07-26 23:03:49.980254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.486 [2024-07-26 23:03:49.980297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.486 [2024-07-26 23:03:49.980317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.772 [2024-07-26 23:03:49.994756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.772 [2024-07-26 23:03:49.994795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.772 [2024-07-26 23:03:49.994815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.772 [2024-07-26 23:03:50.010287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.772 [2024-07-26 23:03:50.010345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.772 [2024-07-26 23:03:50.010376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.772 [2024-07-26 23:03:50.029661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.772 [2024-07-26 23:03:50.029696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.772 [2024-07-26 23:03:50.029716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.772 [2024-07-26 23:03:50.048741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.772 [2024-07-26 23:03:50.048778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.772 [2024-07-26 23:03:50.048798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.772 [2024-07-26 23:03:50.066460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.772 [2024-07-26 23:03:50.066496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.772 [2024-07-26 23:03:50.066515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.772 [2024-07-26 23:03:50.080179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.772 [2024-07-26 23:03:50.080214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
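The three records repeating above are one injected failure each: the harness routed the target's crc32c through the error-injection accel module (accel_assign_opc -o crc32c -m error) and armed it to corrupt 256 results (accel_error_inject_error -o crc32c -t corrupt -i 256), so the data digest on each C2HData PDU the target sends is wrong. The host, attached with --ddgst, recomputes CRC32C on receive (nvme_tcp_accel_seq_recv_compute_crc32_done), logs the data digest error, and completes the READ with a transient transport error that gets resubmitted because --bdev-retry-count -1 was set. A minimal sketch of the same setup driven by hand, reusing the exact RPCs logged above (the ./spdk prefix stands in for the full workspace path):
  # route the target's crc32c through the error-injection accel module
  ./spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
  # configure the initiator-side bdev layer over bperf's RPC socket
  ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # keep injection disabled while the controller attaches...
  ./spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then arm it to corrupt the next 256 crc32c results
  ./spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256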
00:33:57.772 [2024-07-26 23:03:50.080246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.772 [2024-07-26 23:03:50.092773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.772 [2024-07-26 23:03:50.092808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.092828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.106771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.106805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.106825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.119340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.119375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.119394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.134561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.134596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.134615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.148886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.148920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.148963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.161585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.161628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.161647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.178106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.178141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:18894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.178175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.193953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.194004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.194027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.207551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.207587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.207606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.223887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.223923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.223943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.237860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.237912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.237935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.773 [2024-07-26 23:03:50.251496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:57.773 [2024-07-26 23:03:50.251531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.773 [2024-07-26 23:03:50.251550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.267069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.267104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.267123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.282951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.282987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.283007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.295153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.295189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.295208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.313295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.313332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.313351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.325366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.325401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.325431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.341175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.341211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.341230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.353888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.353924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.353943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.369562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.369598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.369617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.387347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 
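Reading the completion records: the status printed as (00/22) decodes to Status Code Type 0x0 (generic command status) and Status Code 0x22 (Transient Transport Error); dnr:0 means the Do Not Retry bit is clear, which is what allows the -1 retry policy to resubmit each failed READ, and cid ties each completion back to the matching READ print above it. To tally the failures from a saved copy of this console output (bperf-console.log is a hypothetical capture file):
  # count detected digest errors and the retried completions they produced
  grep -c 'data digest error on tqpair' bperf-console.log
  grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log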
00:33:58.031 [2024-07-26 23:03:50.387386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.387405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.399464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.399499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.399524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.414458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.414493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.414512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.427737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.427782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.427802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.440182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.440217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.031 [2024-07-26 23:03:50.440235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.031 [2024-07-26 23:03:50.454870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.031 [2024-07-26 23:03:50.454905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.032 [2024-07-26 23:03:50.454946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.032 [2024-07-26 23:03:50.467598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.032 [2024-07-26 23:03:50.467633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.032 [2024-07-26 23:03:50.467652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.032 [2024-07-26 23:03:50.483641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.032 [2024-07-26 23:03:50.483677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.032 [2024-07-26 23:03:50.483706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.032 [2024-07-26 23:03:50.496478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.032 [2024-07-26 23:03:50.496513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.032 [2024-07-26 23:03:50.496533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.032 [2024-07-26 23:03:50.513758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.032 [2024-07-26 23:03:50.513794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.032 [2024-07-26 23:03:50.513813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.032 [2024-07-26 23:03:50.527266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.032 [2024-07-26 23:03:50.527311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.032 [2024-07-26 23:03:50.527343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.542660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.542695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.542714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.556034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.556091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.556122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.571672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.571707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.571728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.584408] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.584444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.584463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.600235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.600271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.600291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.611800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.611835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.611864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.625876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.625912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.625931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.640608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.640648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.640667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.653340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.653376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.653395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.668570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.668605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.668625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.681192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.681227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.681247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.695935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.695970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.696000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.708554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.708589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.708609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.723362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.723398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.723428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.735390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.735425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.735445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.751705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.751741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.751764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.763711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.763746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.763774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.290 [2024-07-26 23:03:50.779866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.290 [2024-07-26 23:03:50.779902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.290 [2024-07-26 23:03:50.779921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.548 [2024-07-26 23:03:50.794716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.794752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.794771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.807586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.807622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.807641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.824617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.824654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.824673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.838173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.838209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.838228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.850628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.850663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.850687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.865290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.865325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.865370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.877614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.877656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.877681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.894105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.894176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.894198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.905671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.905716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.905738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.920489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.920534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.920556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.933760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.933795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.933815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.947066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.947101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.947141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.963654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.963706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:58.549 [2024-07-26 23:03:50.963728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.975820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.975856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.975875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:50.991314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:50.991350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:50.991370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:51.006547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:51.006584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:51.006603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:51.019179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:51.019215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:51.019234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:51.035018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:51.035054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:51.035083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.549 [2024-07-26 23:03:51.047837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.549 [2024-07-26 23:03:51.047874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.549 [2024-07-26 23:03:51.047894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.807 [2024-07-26 23:03:51.062297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.807 [2024-07-26 23:03:51.062333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10729 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.807 [2024-07-26 23:03:51.062353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.807 [2024-07-26 23:03:51.074452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.807 [2024-07-26 23:03:51.074487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.807 [2024-07-26 23:03:51.074507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.807 [2024-07-26 23:03:51.090172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.807 [2024-07-26 23:03:51.090209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.807 [2024-07-26 23:03:51.090228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.807 [2024-07-26 23:03:51.102267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.807 [2024-07-26 23:03:51.102308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.807 [2024-07-26 23:03:51.102328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.807 [2024-07-26 23:03:51.117895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.807 [2024-07-26 23:03:51.117931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.807 [2024-07-26 23:03:51.117950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.808 [2024-07-26 23:03:51.130024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.808 [2024-07-26 23:03:51.130076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.808 [2024-07-26 23:03:51.130099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.808 [2024-07-26 23:03:51.146268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.808 [2024-07-26 23:03:51.146304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.808 [2024-07-26 23:03:51.146323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.808 [2024-07-26 23:03:51.160428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0) 00:33:58.808 [2024-07-26 23:03:51.160463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.808 [2024-07-26 23:03:51.160484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:58.808 [2024-07-26 23:03:51.172054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e68d0)
00:33:58.808 [2024-07-26 23:03:51.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.808 [2024-07-26 23:03:51.172122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:58.808
00:33:58.808                                                                                 Latency(us)
00:33:58.808 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:33:58.808 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:58.808 nvme0n1                                                                     :       2.00   17731.97      69.27       0.00       0.00    7208.18    3786.52   22330.79
00:33:58.808 ===================================================================================================================
00:33:58.808 Total                                                                       :            17731.97      69.27       0.00       0.00    7208.18    3786.52   22330.79
00:33:58.808 0
00:33:58.808 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:58.808 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:58.808 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:58.808 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:58.808 | .driver_specific
00:33:58.808 | .nvme_error
00:33:58.808 | .status_code
00:33:58.808 | .command_transient_transport_error'
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 ))
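
[editor's note] The (( 139 > 0 )) check above is the pass condition for this run: with digest corruption armed, get_transient_errcount must find a non-zero command_transient_transport_error counter in the iostat output. A minimal standalone sketch of that extraction, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock and jq is installed; the path and RPC pipeline are copied from the trace above, everything else is illustrative:

#!/usr/bin/env bash
# Sketch of digest.sh's get_transient_errcount: pull the per-bdev NVMe error
# counters exposed by bdev_get_iostat (populated because the controller was
# set up with --nvme-error-stat) and keep only the transient transport count.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this job
errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
	jq -r '.bdevs[0]
	       | .driver_specific
	       | .nvme_error
	       | .status_code
	       | .command_transient_transport_error')
(( errcount > 0 )) && echo "PASS: $errcount transient transport errors recorded"
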
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3691931
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3691931 ']'
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3691931
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3691931
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3691931'
killing process with pid 3691931
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3691931
Received shutdown signal, test time was about 2.000000 seconds
00:33:59.066
00:33:59.066                                                                                 Latency(us)
00:33:59.066 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:33:59.066 ===================================================================================================================
00:33:59.066 Total                                                                       :                0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:33:59.066 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3691931
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3692338
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3692338 /var/tmp/bperf.sock
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3692338 ']'
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:59.325 23:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:59.325 [2024-07-26 23:03:51.749040] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:33:59.325 [2024-07-26 23:03:51.749120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692338 ]
00:33:59.325 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:59.325 Zero copy mechanism will not be used.
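
[editor's note] The -z flag starts bdevperf idle, so no I/O runs until an RPC kicks it off, and waitforlisten above polls the new process until its UNIX-domain RPC socket answers. A rough equivalent of that launch-and-wait step, assuming the same socket path and flags as the trace; the readiness probe via rpc_get_methods and the retry budget of 100 mirror what the autotest helper appears to do, but treat both as assumptions:

# Sketch: launch bdevperf suspended (-z) and wait for its RPC server to come up.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
	-w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
for ((i = 0; i < 100; i++)); do
	# rpc_get_methods only succeeds once the app's RPC listener is live
	if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; then
		break
	fi
	sleep 0.5
done
kill -0 "$bperfpid" || echo "bdevperf exited before listening on /var/tmp/bperf.sock" >&2
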
00:33:59.325 EAL: No free 2048 kB hugepages reported on node 1
00:33:59.325 [2024-07-26 23:03:51.811038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:59.583 [2024-07-26 23:03:51.906022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:59.583 23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:59.841 23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:00.100 nvme0n1
00:34:00.100 23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
23:03:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:00.358 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:00.358 Zero copy mechanism will not be used.
00:34:00.358 Running I/O for 2 seconds...
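
[editor's note] The trace above is the whole second round in one place: error counting is switched on at the initiator, CRC32C corruption is disabled while the controller attaches with data digest (--ddgst) enabled, corruption is then re-armed, and the queued randread job is kicked off through bdevperf's Python helper. Condensed into a sketch using the same RPCs as the trace; the assumption that rpc_cmd with no -s reaches the nvmf target app on its default /var/tmp/spdk.sock is ours, not the log's:

# Sketch of one digest-error round, mirroring host/digest.sh lines 61-69 above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Initiator side (bdevperf): count NVMe errors by status code and retry
# indefinitely (-1), so digest failures surface as transient errors rather
# than hard I/O failures.
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c corruption off while the controller attaches with data digest on...
"$RPC" accel_error_inject_error -o crc32c -t disable   # target app's default socket assumed
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
	-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then arm corruption (same -i 32 argument as the trace) and run the queued job.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
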
00:34:00.358 [2024-07-26 23:03:52.716131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.358 [2024-07-26 23:03:52.716193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.358 [2024-07-26 23:03:52.716214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.358 [2024-07-26 23:03:52.728301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.358 [2024-07-26 23:03:52.728332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.358 [2024-07-26 23:03:52.728377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.358 [2024-07-26 23:03:52.740252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.358 [2024-07-26 23:03:52.740282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.358 [2024-07-26 23:03:52.740304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.358 [2024-07-26 23:03:52.752416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.358 [2024-07-26 23:03:52.752449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.358 [2024-07-26 23:03:52.752476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.358 [2024-07-26 23:03:52.764470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.358 [2024-07-26 23:03:52.764502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.358 [2024-07-26 23:03:52.764526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.358 [2024-07-26 23:03:52.776524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.358 [2024-07-26 23:03:52.776557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.358 [2024-07-26 23:03:52.776581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.358 [2024-07-26 23:03:52.788458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.359 [2024-07-26 23:03:52.788492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.359 [2024-07-26 23:03:52.788512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.359 [2024-07-26 23:03:52.800497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.359 [2024-07-26 23:03:52.800530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.359 [2024-07-26 23:03:52.800561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.359 [2024-07-26 23:03:52.812640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.359 [2024-07-26 23:03:52.812674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.359 [2024-07-26 23:03:52.812693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.359 [2024-07-26 23:03:52.824562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.359 [2024-07-26 23:03:52.824597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.359 [2024-07-26 23:03:52.824618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.359 [2024-07-26 23:03:52.836499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.359 [2024-07-26 23:03:52.836536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.359 [2024-07-26 23:03:52.836557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.359 [2024-07-26 23:03:52.848534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.359 [2024-07-26 23:03:52.848569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.359 [2024-07-26 23:03:52.848589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.359 [2024-07-26 23:03:52.860600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.359 [2024-07-26 23:03:52.860635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.359 [2024-07-26 23:03:52.860655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.617 [2024-07-26 23:03:52.872629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.617 [2024-07-26 23:03:52.872661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.617 [2024-07-26 23:03:52.872680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.617 [2024-07-26 23:03:52.884528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.617 [2024-07-26 23:03:52.884562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:52.884587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:52.896685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:52.896720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:52.896740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:52.908567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:52.908604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:52.908623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:52.920582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:52.920618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:52.920637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:52.932550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:52.932584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:52.932604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:52.944576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:52.944611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:52.944630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:52.956486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:52.956519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:00.618 [2024-07-26 23:03:52.956539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:52.968522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:52.968555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:52.968574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:52.980549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:52.980584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:52.980604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:52.992473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:52.992507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:52.992526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.004439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.004472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.004492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.016461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.016495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.016514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.028355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.028402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.028421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.040373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.040420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.040439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.052492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.052526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.052544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.064461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.064494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.064513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.076478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.076512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.076531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.088470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.088504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.088529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.100501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.100535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.100554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.618 [2024-07-26 23:03:53.112444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.618 [2024-07-26 23:03:53.112476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.618 [2024-07-26 23:03:53.112495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.124343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.124389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.124408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.136438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.136471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.136491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.148413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.148457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.148473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.160367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.160415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.160434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.172338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.172385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.172403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.183853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.183886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.183904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.195789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.195828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.195848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.207709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 
00:34:00.879 [2024-07-26 23:03:53.207742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.207761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.219665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.219698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.219718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.231606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.231641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.231661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.243431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.243466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.243486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.255356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.255402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.255422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.267356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.267402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.267420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.279377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.279423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.279442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.291334] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.879 [2024-07-26 23:03:53.291362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.879 [2024-07-26 23:03:53.291379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.879 [2024-07-26 23:03:53.303342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.880 [2024-07-26 23:03:53.303386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.880 [2024-07-26 23:03:53.303403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.880 [2024-07-26 23:03:53.315480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.880 [2024-07-26 23:03:53.315513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.880 [2024-07-26 23:03:53.315532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.880 [2024-07-26 23:03:53.327322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.880 [2024-07-26 23:03:53.327351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.880 [2024-07-26 23:03:53.327368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.880 [2024-07-26 23:03:53.339235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.880 [2024-07-26 23:03:53.339264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.880 [2024-07-26 23:03:53.339281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.880 [2024-07-26 23:03:53.351213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.880 [2024-07-26 23:03:53.351242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.880 [2024-07-26 23:03:53.351259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.880 [2024-07-26 23:03:53.363479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.880 [2024-07-26 23:03:53.363511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.880 [2024-07-26 23:03:53.363530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:34:00.880 [2024-07-26 23:03:53.375511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:00.880 [2024-07-26 23:03:53.375545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.880 [2024-07-26 23:03:53.375564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.139 [2024-07-26 23:03:53.387623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:01.139 [2024-07-26 23:03:53.387657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.139 [2024-07-26 23:03:53.387676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.139 [2024-07-26 23:03:53.399601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:01.139 [2024-07-26 23:03:53.399634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.139 [2024-07-26 23:03:53.399662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.139 [2024-07-26 23:03:53.411630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:01.139 [2024-07-26 23:03:53.411663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.139 [2024-07-26 23:03:53.411681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.139 [2024-07-26 23:03:53.424047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:01.139 [2024-07-26 23:03:53.424090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.139 [2024-07-26 23:03:53.424123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.139 [2024-07-26 23:03:53.435933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:01.139 [2024-07-26 23:03:53.435966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.139 [2024-07-26 23:03:53.435985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.139 [2024-07-26 23:03:53.447851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0) 00:34:01.139 [2024-07-26 23:03:53.447884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.139 [2024-07-26 23:03:53.447903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:01.139 [2024-07-26 23:03:53.459792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0)
00:34:01.139 [2024-07-26 23:03:53.459825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.139 [2024-07-26 23:03:53.459845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record pattern -- data digest error on tqpair=(0x148b2c0), READ sqid:1 cid:15 len:32, COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- repeats roughly every 12 ms with only the lba and sqhd values changing, about 100 further times from 23:03:53.471 through 23:03:54.683 ...]
00:34:02.433 [2024-07-26 23:03:54.695327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0)
00:34:02.433 [2024-07-26 23:03:54.695376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.433 [2024-07-26 23:03:54.695393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.434 [2024-07-26 23:03:54.707156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b2c0)
00:34:02.434 [2024-07-26 23:03:54.707186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.434 [2024-07-26 23:03:54.707207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.434
00:34:02.434 Latency(us)
00:34:02.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:02.434 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:02.434 nvme0n1 : 2.00 2583.03 322.88 0.00 0.00 6187.46 5582.70 12718.84
00:34:02.434 ===================================================================================================================
00:34:02.434 Total : 2583.03 322.88 0.00 0.00 6187.46 5582.70 12718.84
00:34:02.434 0
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:02.434 | .driver_specific
00:34:02.434 | .nvme_error
00:34:02.434 | .status_code
00:34:02.434 | .command_transient_transport_error'
00:34:02.692
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3692338
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3692338 ']'
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3692338
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
23:03:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3692338
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3692338'
killing process with pid 3692338
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3692338
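The (( 167 > 0 )) check above is the pass/fail gate for this sub-test: every injected crc32c corruption must surface as a counted COMMAND TRANSIENT TRANSPORT ERROR rather than as a failed I/O. A minimal standalone sketch of the same query, with the socket path and bdev name assumed from the trace:

  # Sketch: read the transient-transport-error counter the way
  # get_transient_errcount does (bperf.sock and nvme0n1 taken from the trace).
  count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( count > 0 )) && echo "injected digest errors surfaced as $count transient transport errors"

The counter is only populated because bdev_nvme was configured with --nvme-error-stat, and --bdev-retry-count -1 is what keeps each digest failure retryable instead of fatal, which is why the workload above still completed at 2583 IOPS with Fail/s at 0.00.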
00:34:02.692 Received shutdown signal, test time was about 2.000000 seconds
00:34:02.692
00:34:02.692 Latency(us)
00:34:02.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:02.692 ===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3692338
00:34:02.950
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3692748
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3692748 /var/tmp/bperf.sock
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3692748 ']'
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
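waitforlisten's polling loop runs with xtrace disabled, so the log goes quiet here until bdevperf prints its startup banner. A rough equivalent of this launch-and-wait step; SPDK_DIR and the use of rpc_get_methods as the readiness probe are assumptions, not lifted from the trace:

  # Sketch: start bdevperf idle (-z) on a private RPC socket, then block until
  # that socket answers RPCs before configuring it.
  BPERF_SOCK=/var/tmp/bperf.sock
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  until "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods &> /dev/null; do
      kill -0 "$bperfpid" || exit 1   # bail out if bdevperf died during startup
      sleep 0.1
  done

The -z flag matters: bdevperf comes up with no workload running, so the test can attach the controller and arm the error injection over RPC before perform_tests starts the I/O.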
00:34:02.951 [2024-07-26 23:03:55.283543] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:34:02.951 [2024-07-26 23:03:55.283635] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692748 ]
00:34:02.951 EAL: No free 2048 kB hugepages reported on node 1
00:34:03.209 [2024-07-26 23:03:55.346111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:03.209 [2024-07-26 23:03:55.436604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:03.209
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:03.467
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
23:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:03.725 nvme0n1
23:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
23:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
23:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
23:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
23:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
23:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:03.983 Running I/O for 2 seconds...
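The sequence just traced is the whole error-injection setup for the randwrite pass: per-status-code error counting and unlimited retries on the initiator, crc32c corruption armed on the accel layer only after the controller has attached cleanly with data digest (--ddgst) enabled. Condensed into a hedged sketch; bperf_rpc and rpc_cmd are reconstructed here as helpers, and rpc_cmd addressing the nvmf target's default RPC socket is an assumption:

  # Sketch of the setup just traced. SPDK_DIR is assumed; target_rpc stands in
  # for the autotest rpc_cmd helper.
  bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  target_rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

  # per-status-code NVMe error counters on; retry forever so injected digest
  # failures stay transient instead of failing the job
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # corruption off while the controller attaches with data digest enabled ...
  target_rpc accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ... then corrupt the next 256 crc32c operations and start the workload
  target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Disabling injection before bdev_nvme_attach_controller is the subtle step: the connect itself exchanges digest-protected PDUs, and corrupting those would fail the attach rather than exercise the retry path the test is after. With -t corrupt -i 256 armed, every digest mismatch below shows up on the target side (tcp.c data_crc32_calc_done) and is completed back to the host as a transient transport error.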
00:34:03.983 [2024-07-26 23:03:56.320980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90
00:34:03.983 [2024-07-26 23:03:56.321253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:03.983 [2024-07-26 23:03:56.321311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... the same three-record pattern -- Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90, WRITE sqid:1 len:1, COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- repeats roughly every 13 ms with cid cycling through 126/4/3/2/125 and varying lba, 12 more times from 23:03:56.334 through 23:03:56.482 ...]
00:34:04.242 [2024-07-26 23:03:56.495858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90
00:34:04.243 [2024-07-26 23:03:56.496213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:04.243 [2024-07-26 23:03:56.496243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.509198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.509478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.509507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.522545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.522863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.522892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.535958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.536268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.536296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.549281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.549572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.549600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.562606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.562905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.562932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.575826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.576085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.576131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.589148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.589452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.589481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.602490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.602736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.602766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.615752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.616089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.616133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.629226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.629549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.629577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.642532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.642764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.642792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.655882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.656174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.656204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.669436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.669701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.669729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.682697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.682925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.682973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.695948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.696212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.696256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.709303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.709695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.709732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.722655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.722945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.722973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.243 [2024-07-26 23:03:56.736118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.243 [2024-07-26 23:03:56.736484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.243 [2024-07-26 23:03:56.736512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.502 [2024-07-26 23:03:56.749342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.502 [2024-07-26 23:03:56.749621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.502 [2024-07-26 23:03:56.749650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.502 [2024-07-26 23:03:56.762598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.502 [2024-07-26 23:03:56.762823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.502 [2024-07-26 23:03:56.762867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.502 [2024-07-26 23:03:56.776097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.502 [2024-07-26 23:03:56.776351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.502 [2024-07-26 23:03:56.776381] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.502 [2024-07-26 23:03:56.789506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.502 [2024-07-26 23:03:56.789843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.502 [2024-07-26 23:03:56.789871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.502 [2024-07-26 23:03:56.802955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.502 [2024-07-26 23:03:56.803248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.502 [2024-07-26 23:03:56.803277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.502 [2024-07-26 23:03:56.816546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.502 [2024-07-26 23:03:56.816777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.502 [2024-07-26 23:03:56.816805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.502 [2024-07-26 23:03:56.829815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.502 [2024-07-26 23:03:56.830074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.502 [2024-07-26 23:03:56.830117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.843256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.843541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.843569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.856515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.856744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.856791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.869706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.869964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.870007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.883077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.883380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.883408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.896402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.896666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.896698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.909780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.910009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.910066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.922920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.923209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.923239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.936300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.936624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.936651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.949698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.949951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.949979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.963040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.963424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.963451] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.976294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.976560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.976588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:56.989626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:56.989855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:56.989885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.503 [2024-07-26 23:03:57.002979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.503 [2024-07-26 23:03:57.003249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.503 [2024-07-26 23:03:57.003279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.016203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.016560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.016589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.029625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.029964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.029992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.042755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.043088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.043132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.056177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.056489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 
23:03:57.056524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.069487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.069776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.069804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.082871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.083119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.083148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.096101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.096422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.096466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.109335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.109627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.109654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.122693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.122919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.122968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.136055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.136362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.136404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.149471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.149704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:04.762 [2024-07-26 23:03:57.149731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.162678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.162908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.162949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.175887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.176136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.176184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.189249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.189535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.189562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.202575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.202829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.202857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.215896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.216161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.216189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.229386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.762 [2024-07-26 23:03:57.229659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.762 [2024-07-26 23:03:57.229686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.762 [2024-07-26 23:03:57.242754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.763 [2024-07-26 23:03:57.242985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2575 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:04.763 [2024-07-26 23:03:57.243011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:04.763 [2024-07-26 23:03:57.256071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:04.763 [2024-07-26 23:03:57.256352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.763 [2024-07-26 23:03:57.256380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.269231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.269572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.269600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.282969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.283263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.283298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.297161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.297496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.297527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.311280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.311580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.311612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.325351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.325658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.325688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.339441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.339741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3563 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.339773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.353453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.353755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.353789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.367466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.367734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.367766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.381490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.381786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.381818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.395505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.395799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.395830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.409452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.409784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.409821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.423508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.423799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.423831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.437431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.437702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:23471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.437734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.451316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.451595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.451626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.465512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.465806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.465838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.479933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.021 [2024-07-26 23:03:57.480261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.021 [2024-07-26 23:03:57.480288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.021 [2024-07-26 23:03:57.493940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.022 [2024-07-26 23:03:57.494254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.022 [2024-07-26 23:03:57.494281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.022 [2024-07-26 23:03:57.507936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.022 [2024-07-26 23:03:57.508238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.022 [2024-07-26 23:03:57.508267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.022 [2024-07-26 23:03:57.521956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.022 [2024-07-26 23:03:57.522267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.022 [2024-07-26 23:03:57.522293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.280 [2024-07-26 23:03:57.535998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.280 [2024-07-26 23:03:57.536407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:7415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.280 [2024-07-26 23:03:57.536439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.280 [2024-07-26 23:03:57.550612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.280 [2024-07-26 23:03:57.550874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.280 [2024-07-26 23:03:57.550905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.280 [2024-07-26 23:03:57.564857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.280 [2024-07-26 23:03:57.565156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.280 [2024-07-26 23:03:57.565184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.280 [2024-07-26 23:03:57.578915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.280 [2024-07-26 23:03:57.579213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.280 [2024-07-26 23:03:57.579239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.280 [2024-07-26 23:03:57.592888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.280 [2024-07-26 23:03:57.593165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.280 [2024-07-26 23:03:57.593206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.606886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.607186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.607215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.620865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.621173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.621201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.634905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.635229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.635257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.648879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.649175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.649204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.662880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.663189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.663219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.676895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.677218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.677246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.690902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.691201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.691246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.704859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.705137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.705181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.718936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.719311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.719340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.732866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.733164] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.733194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.746863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.747177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.747207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.760904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.761225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.761252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.281 [2024-07-26 23:03:57.774885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.281 [2024-07-26 23:03:57.775156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.281 [2024-07-26 23:03:57.775184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.539 [2024-07-26 23:03:57.788805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.539 [2024-07-26 23:03:57.789081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-07-26 23:03:57.789125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.539 [2024-07-26 23:03:57.802699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.539 [2024-07-26 23:03:57.802963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-07-26 23:03:57.803000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.539 [2024-07-26 23:03:57.816678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.540 [2024-07-26 23:03:57.816969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.540 [2024-07-26 23:03:57.817001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.540 [2024-07-26 23:03:57.830861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90 00:34:05.540 [2024-07-26 
23:03:57.831143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.540 [2024-07-26 23:03:57.831185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... ~33 similar record triples elided: tcp.c:2058:data_crc32_calc_done data digest errors on tqpair=(0x1325910) with pdu=0x2000190fef90, each followed by the WRITE command print (sqid:1, cid cycling 2/3/4/125/126, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000, lba varying) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd:007d, spanning 2024-07-26 23:03:57.844 through 23:03:58.293 ...]
00:34:06.056 [2024-07-26 23:03:58.306742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325910) with pdu=0x2000190fef90
00:34:06.056 [2024-07-26 23:03:58.307016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.056 [2024-07-26 23:03:58.307066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:06.056
00:34:06.056 Latency(us)
00:34:06.056 Device Information   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:06.056 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:06.056 nvme0n1              :       2.01   18632.56      72.78       0.00     0.00    6854.32    4538.97   14466.47
00:34:06.056 ===================================================================================================================
00:34:06.056 Total                :              18632.56      72.78       0.00     0.00    6854.32    4538.97   14466.47
00:34:06.056 0
00:34:06.056 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:06.056 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:06.056 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:06.056 | .driver_specific
00:34:06.056 | .nvme_error
00:34:06.056 | .status_code
00:34:06.056 | .command_transient_transport_error'
00:34:06.056 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 ))
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3692748
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3692748 ']'
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3692748
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3692748
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3692748'
00:34:06.314 killing process with pid 3692748
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3692748
00:34:06.314 Received shutdown signal, test time was about 2.000000 seconds
00:34:06.314
00:34:06.314 Latency(us)
00:34:06.314 Device Information   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:06.314 ===================================================================================================================
00:34:06.314 Total                :       0.00       0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:34:06.314 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3692748
00:34:06.572 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:34:06.573 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:06.573 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:06.573 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:34:06.573 23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
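The get_transient_errcount step traced above boils down to one RPC plus a jq filter over the bdev's NVMe error counters (populated because bdev_nvme_set_options was called with --nvme-error-stat). A minimal standalone sketch of the same query; SPDK_DIR stands in for the checkout path used in this run, and the socket and bdev name are parameters:

  # Read back the count of COMMAND TRANSIENT TRANSPORT ERROR completions
  # recorded for a bdev over the bperf RPC socket (mirrors the trace above).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bperf.sock
  bdev=nvme0n1
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"

In the run above this query returned 146, which is what the (( 146 > 0 )) check is testing.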
23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3693152
23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3693152 /var/tmp/bperf.sock
23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3693152 ']'
23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
23:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:06.573 [2024-07-26 23:03:58.876797] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
[2024-07-26 23:03:58.876883] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3693152 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
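The waitforlisten step above blocks until the freshly forked bdevperf answers on /var/tmp/bperf.sock (up to max_retries=100). A rough stand-in for that wait, not the actual helper; the rpc_get_methods probe call is an assumption for this sketch:

  # Poll until an SPDK app is serving RPCs on its UNIX socket, or give up.
  # Hypothetical replacement for waitforlisten, for illustration only.
  wait_for_rpc_sock() {
      local pid=$1 sock=$2 retries=${3:-100}
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1    # app exited early
          "$SPDK_DIR"/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
              >/dev/null 2>&1 && return 0           # socket is up and answering
          sleep 0.1
      done
      return 1
  }
  # usage: wait_for_rpc_sock "$bperfpid" /var/tmp/bperf.sock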
00:34:06.573 EAL: No free 2048 kB hugepages reported on node 1
00:34:06.573 [2024-07-26 23:03:58.934331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:06.573 [2024-07-26 23:03:59.019327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:06.831 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:06.831 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:34:06.831 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:06.831 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:07.089 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:07.089 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:07.089 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:07.089 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:07.089 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:07.089 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:07.347 nvme0n1
00:34:07.347 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:07.347 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:07.347 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:07.347 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:07.347 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:07.347 23:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:07.606 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:07.606 Zero copy mechanism will not be used.
00:34:07.606 Running I/O for 2 seconds...
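Condensed, the setup just traced is: enable per-controller NVMe error accounting and unlimited bdev retries on the bperf side, make sure accel error injection starts disabled on the target, attach the TCP controller with data digest (--ddgst) enabled, re-arm CRC32C corruption injection (-i 32, exactly as traced), and start the workload. A sketch of the same sequence, reusing SPDK_DIR from above; note the accel_error_inject_error calls go through rpc_cmd, i.e. to the nvmf target's default RPC socket, not to bperf.sock:

  rpc="$SPDK_DIR"/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # bperf side: count NVMe errors per controller; retry transient errors forever
  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side (default RPC socket): begin with injection off
  "$rpc" accel_error_inject_error -o crc32c -t disable
  # attach the controller with TCP data digest enabled end to end
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt crc32c results on the target (-i 32 as traced), then drive I/O
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests

With the corruption armed, writes arriving at the target fail their TCP data-digest check on a regular cadence, which is exactly the record stream that follows.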
00:34:07.606 [2024-07-26 23:03:59.873246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90
00:34:07.606 [2024-07-26 23:03:59.873660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:07.606 [2024-07-26 23:03:59.873702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... ~80 similar record triples elided: tcp.c:2058:data_crc32_calc_done data digest errors on tqpair=(0x1325c50) with pdu=0x2000190fef90, each followed by the WRITE command print (sqid:1 cid:15, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, lba varying) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd stepping 0x20 per record (0001/0021/0041/0061), spanning 2024-07-26 23:03:59.891 through 23:04:01.365 ...]
00:34:08.898 [2024-07-26 23:04:01.364792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90
00:34:08.898 [2024-07-26 23:04:01.365219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:08.898 [2024-07-26 23:04:01.365248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:08.898 [2024-07-26 23:04:01.382603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90
00:34:08.898 [2024-07-26 23:04:01.383037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:08.898 [2024-07-26 23:04:01.383086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.156 [2024-07-26 23:04:01.401155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.156 [2024-07-26 23:04:01.401666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.156 [2024-07-26 23:04:01.401693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.156 [2024-07-26 23:04:01.420131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.156 [2024-07-26 23:04:01.420496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.156 [2024-07-26 23:04:01.420524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.156 [2024-07-26 23:04:01.439818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.156 [2024-07-26 23:04:01.440261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.156 [2024-07-26 23:04:01.440289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.156 [2024-07-26 23:04:01.457342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.156 [2024-07-26 23:04:01.457772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.156 [2024-07-26 23:04:01.457800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.156 [2024-07-26 23:04:01.475622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.156 [2024-07-26 23:04:01.475963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.156 [2024-07-26 23:04:01.475990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.156 [2024-07-26 23:04:01.493732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.156 [2024-07-26 23:04:01.494085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.156 [2024-07-26 23:04:01.494114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.156 [2024-07-26 23:04:01.510990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.156 [2024-07-26 23:04:01.511438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.156 [2024-07-26 23:04:01.511466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.156 [2024-07-26 23:04:01.529991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.156 [2024-07-26 23:04:01.530452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.156 [2024-07-26 23:04:01.530479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.157 [2024-07-26 23:04:01.546883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.157 [2024-07-26 23:04:01.547259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.157 [2024-07-26 23:04:01.547311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.157 [2024-07-26 23:04:01.566586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.157 [2024-07-26 23:04:01.566965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.157 [2024-07-26 23:04:01.566995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.157 [2024-07-26 23:04:01.584967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.157 [2024-07-26 23:04:01.585440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.157 [2024-07-26 23:04:01.585468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.157 [2024-07-26 23:04:01.603183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.157 [2024-07-26 23:04:01.603640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.157 [2024-07-26 23:04:01.603667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.157 [2024-07-26 23:04:01.621864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.157 [2024-07-26 23:04:01.622239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.157 [2024-07-26 23:04:01.622275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.157 [2024-07-26 23:04:01.640108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.157 [2024-07-26 23:04:01.640521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.157 [2024-07-26 23:04:01.640566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.157 [2024-07-26 23:04:01.658509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.157 [2024-07-26 23:04:01.658908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.157 [2024-07-26 23:04:01.658938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.415 [2024-07-26 23:04:01.677040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.415 [2024-07-26 23:04:01.677609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.415 [2024-07-26 23:04:01.677646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.415 [2024-07-26 23:04:01.695378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.415 [2024-07-26 23:04:01.695806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.415 [2024-07-26 23:04:01.695841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.415 [2024-07-26 23:04:01.713382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.415 [2024-07-26 23:04:01.713743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.415 [2024-07-26 23:04:01.713770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.415 [2024-07-26 23:04:01.731855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.415 [2024-07-26 23:04:01.732227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.415 [2024-07-26 23:04:01.732256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.415 [2024-07-26 23:04:01.750182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.415 [2024-07-26 23:04:01.750458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.415 [2024-07-26 23:04:01.750500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.415 [2024-07-26 23:04:01.768373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.415 [2024-07-26 23:04:01.768645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:09.415 [2024-07-26 23:04:01.768673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.415 [2024-07-26 23:04:01.787687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.415 [2024-07-26 23:04:01.788028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.415 [2024-07-26 23:04:01.788081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.416 [2024-07-26 23:04:01.805857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.416 [2024-07-26 23:04:01.806281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.416 [2024-07-26 23:04:01.806326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.416 [2024-07-26 23:04:01.824006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.416 [2024-07-26 23:04:01.824420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.416 [2024-07-26 23:04:01.824448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.416 [2024-07-26 23:04:01.843243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.416 [2024-07-26 23:04:01.843614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.416 [2024-07-26 23:04:01.843659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.416 [2024-07-26 23:04:01.861452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1325c50) with pdu=0x2000190fef90 00:34:09.416 [2024-07-26 23:04:01.861824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.416 [2024-07-26 23:04:01.861870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.416 00:34:09.416 Latency(us) 00:34:09.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.416 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:09.416 nvme0n1 : 2.01 1685.90 210.74 0.00 0.00 9465.07 5194.33 20194.80 00:34:09.416 =================================================================================================================== 00:34:09.416 Total : 1685.90 210.74 0.00 0.00 9465.07 5194.33 20194.80 00:34:09.416 0 00:34:09.416 23:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:09.416 23:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:09.416 23:04:01 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:09.416 | .driver_specific 00:34:09.416 | .nvme_error 00:34:09.416 | .status_code 00:34:09.416 | .command_transient_transport_error' 00:34:09.416 23:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 109 > 0 )) 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3693152 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3693152 ']' 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3693152 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3693152 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3693152' 00:34:09.674 killing process with pid 3693152 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3693152 00:34:09.674 Received shutdown signal, test time was about 2.000000 seconds 00:34:09.674 00:34:09.674 Latency(us) 00:34:09.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.674 =================================================================================================================== 00:34:09.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:09.674 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3693152 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3691789 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3691789 ']' 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3691789 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3691789 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3691789' 00:34:09.933 killing process with pid 3691789 00:34:09.933 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3691789 00:34:09.933 23:04:02 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3691789 00:34:10.192 00:34:10.192 real 0m14.915s 00:34:10.192 user 0m29.854s 00:34:10.192 sys 0m3.852s 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:10.192 ************************************ 00:34:10.192 END TEST nvmf_digest_error 00:34:10.192 ************************************ 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:10.192 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:10.488 rmmod nvme_tcp 00:34:10.488 rmmod nvme_fabrics 00:34:10.488 rmmod nvme_keyring 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3691789 ']' 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3691789 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3691789 ']' 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3691789 00:34:10.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3691789) - No such process 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3691789 is not found' 00:34:10.489 Process with pid 3691789 is not found 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:10.489 23:04:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.411 23:04:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:12.411 00:34:12.411 real 0m34.729s 00:34:12.411 user 1m0.402s 00:34:12.411 sys 0m9.728s 00:34:12.411 23:04:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:12.411 23:04:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:12.411 ************************************ 00:34:12.411 END TEST nvmf_digest 00:34:12.411 
************************************ 00:34:12.411 23:04:04 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:12.411 23:04:04 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:12.411 23:04:04 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:12.411 23:04:04 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:12.411 23:04:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:12.411 23:04:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:12.411 23:04:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:12.411 ************************************ 00:34:12.411 START TEST nvmf_bdevperf 00:34:12.411 ************************************ 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:12.411 * Looking for test storage... 00:34:12.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.411 23:04:04 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:12.412 23:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:14.312 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:14.312 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:14.312 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:14.312 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:14.312 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:14.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:14.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:34:14.570 00:34:14.570 --- 10.0.0.2 ping statistics --- 00:34:14.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.570 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:14.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:34:14.570 00:34:14.570 --- 10.0.0.1 ping statistics --- 00:34:14.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.570 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3695503 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3695503 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3695503 ']' 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:14.570 23:04:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.570 [2024-07-26 23:04:07.007826] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:34:14.570 [2024-07-26 23:04:07.007918] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.570 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.828 [2024-07-26 23:04:07.076596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:14.828 [2024-07-26 23:04:07.172786] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.828 [2024-07-26 23:04:07.172843] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:14.828 [2024-07-26 23:04:07.172869] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.828 [2024-07-26 23:04:07.172883] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:14.828 [2024-07-26 23:04:07.172895] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:14.828 [2024-07-26 23:04:07.172981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:14.828 [2024-07-26 23:04:07.173037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:14.828 [2024-07-26 23:04:07.173040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.828 [2024-07-26 23:04:07.319372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.828 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.086 Malloc0 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
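(The rpc_cmd invocations above configure the freshly started target through the autotest RPC wrapper, which forwards to scripts/rpc.py. Spelled out as direct rpc.py calls — a sketch; the listener step is the one traced immediately below — the target-side setup amounts to:)

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as traced
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420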
00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.086 [2024-07-26 23:04:07.388200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.086 { 00:34:15.086 "params": { 00:34:15.086 "name": "Nvme$subsystem", 00:34:15.086 "trtype": "$TEST_TRANSPORT", 00:34:15.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.086 "adrfam": "ipv4", 00:34:15.086 "trsvcid": "$NVMF_PORT", 00:34:15.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.086 "hdgst": ${hdgst:-false}, 00:34:15.086 "ddgst": ${ddgst:-false} 00:34:15.086 }, 00:34:15.086 "method": "bdev_nvme_attach_controller" 00:34:15.086 } 00:34:15.086 EOF 00:34:15.086 )") 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:15.086 23:04:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:15.086 "params": { 00:34:15.086 "name": "Nvme1", 00:34:15.086 "trtype": "tcp", 00:34:15.086 "traddr": "10.0.0.2", 00:34:15.086 "adrfam": "ipv4", 00:34:15.086 "trsvcid": "4420", 00:34:15.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.086 "hdgst": false, 00:34:15.086 "ddgst": false 00:34:15.086 }, 00:34:15.086 "method": "bdev_nvme_attach_controller" 00:34:15.086 }' 00:34:15.086 [2024-07-26 23:04:07.436995] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:15.086 [2024-07-26 23:04:07.437123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695534 ] 00:34:15.086 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.086 [2024-07-26 23:04:07.500329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.344 [2024-07-26 23:04:07.591195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.602 Running I/O for 1 seconds... 
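(The bdevperf invocation above reads its bdev configuration from /dev/fd/62, a process substitution fed by gen_nvmf_target_json; the printf output traced above is the attach-controller fragment of that config. Written out to a regular file, a standalone equivalent of this 1-second verify run looks like the sketch below — bperf.json is an illustrative name, and the outer "subsystems"/"bdev" wrapper is assumed from the usual shape of SPDK --json configs rather than shown in the trace:)

cat > bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# queue depth 128, 4096-byte I/O, verify workload, 1 second
build/examples/bdevperf --json bperf.json -q 128 -o 4096 -w verify -t 1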
00:34:16.536 00:34:16.536 Latency(us) 00:34:16.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.536 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:16.536 Verification LBA range: start 0x0 length 0x4000 00:34:16.536 Nvme1n1 : 1.01 8957.72 34.99 0.00 0.00 14228.67 2645.71 15631.55 00:34:16.536 =================================================================================================================== 00:34:16.536 Total : 8957.72 34.99 0.00 0.00 14228.67 2645.71 15631.55 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3695790 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.795 { 00:34:16.795 "params": { 00:34:16.795 "name": "Nvme$subsystem", 00:34:16.795 "trtype": "$TEST_TRANSPORT", 00:34:16.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.795 "adrfam": "ipv4", 00:34:16.795 "trsvcid": "$NVMF_PORT", 00:34:16.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.795 "hdgst": ${hdgst:-false}, 00:34:16.795 "ddgst": ${ddgst:-false} 00:34:16.795 }, 00:34:16.795 "method": "bdev_nvme_attach_controller" 00:34:16.795 } 00:34:16.795 EOF 00:34:16.795 )") 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:16.795 23:04:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:16.795 "params": { 00:34:16.795 "name": "Nvme1", 00:34:16.795 "trtype": "tcp", 00:34:16.795 "traddr": "10.0.0.2", 00:34:16.795 "adrfam": "ipv4", 00:34:16.795 "trsvcid": "4420", 00:34:16.795 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:16.795 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:16.795 "hdgst": false, 00:34:16.795 "ddgst": false 00:34:16.795 }, 00:34:16.795 "method": "bdev_nvme_attach_controller" 00:34:16.795 }' 00:34:16.795 [2024-07-26 23:04:09.170230] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:16.795 [2024-07-26 23:04:09.170315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695790 ] 00:34:16.795 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.795 [2024-07-26 23:04:09.230268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.053 [2024-07-26 23:04:09.318816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.053 Running I/O for 15 seconds... 
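(This second bdevperf run differs from the first in two ways: -t 15 runs the verify workload for 15 seconds, and -f keeps bdevperf running when I/O starts failing instead of exiting. That matters because the next traced step hard-kills the target mid-run, and the flood of ABORTED - SQ DELETION completions below is the expected fallout. The injection itself, sketched with the pid and delay taken from the trace at host/bdevperf.sh@33-35:)

kill -9 3695503   # SIGKILL the nvmf target while bdevperf still has I/O in flight
sleep 3           # ride out the outage; -f keeps bdevperf alive reporting errors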
00:34:20.340  23:04:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3695503
00:34:20.340  23:04:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:34:20.340 [2024-07-26 23:04:12.140867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:20.340 [2024-07-26 23:04:12.140918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:20.341 [2024-07-26 23:04:12.142258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:20.341 [2024-07-26 23:04:12.142272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... remaining in-flight READ and WRITE commands (lba 52328 through 53336, len:8 each) printed with the same "ABORTED - SQ DELETION (00/08)" completion; repeated entries elided ...]
00:34:20.343 [2024-07-26 23:04:12.145304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156b150 is same with the state(5) to be set
00:34:20.343 [2024-07-26 23:04:12.145322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:20.343 [2024-07-26 23:04:12.145334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:20.343 [2024-07-26 23:04:12.145361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52688 len:8 PRP1 0x0 PRP2 0x0
00:34:20.343 [2024-07-26 23:04:12.145374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:20.343 [2024-07-26 23:04:12.145452] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x156b150 was disconnected and freed. reset controller.
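The flood of identical notices above is the expected effect of the `kill -9` on the target: deleting the submission queue completes every in-flight command on qpair 1 (queue depth 128) with ABORTED - SQ DELETION, after which bdev_nvme frees the qpair and schedules a controller reset. A small helper for summarizing such a flood instead of reading it record by record; `bdevperf.log` is an assumed capture of this output, not a file the harness actually writes.

awk '
  # command records carry the LBA; track the min/max seen
  /nvme_io_qpair_print_command/ {
      for (i = 1; i <= NF; i++)
          if ($i ~ /^lba:/) {
              lba = substr($i, 5) + 0
              if (!seen || lba < min) min = lba
              if (!seen || lba > max) max = lba
              seen = 1
          }
  }
  # completion records carry the abort status
  /ABORTED - SQ DELETION/ { aborts++ }
  END { printf "aborted completions: %d, lba range: %d-%d\n", aborts, min, max }
' bdevperf.log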
00:34:20.343 [2024-07-26 23:04:12.149235] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.343 [2024-07-26 23:04:12.149319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.343 [2024-07-26 23:04:12.150070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.343 [2024-07-26 23:04:12.150119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.343 [2024-07-26 23:04:12.150138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.343 [2024-07-26 23:04:12.150373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.343 [2024-07-26 23:04:12.150619] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.343 [2024-07-26 23:04:12.150650] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.343 [2024-07-26 23:04:12.150671] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.343 [2024-07-26 23:04:12.154242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.343 [2024-07-26 23:04:12.163520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.343 [2024-07-26 23:04:12.164000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.164028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.164045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.344 [2024-07-26 23:04:12.164328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.344 [2024-07-26 23:04:12.164574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.344 [2024-07-26 23:04:12.164600] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.344 [2024-07-26 23:04:12.164616] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.344 [2024-07-26 23:04:12.168201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.344 [2024-07-26 23:04:12.177528] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.344 [2024-07-26 23:04:12.177964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.177997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.178016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.344 [2024-07-26 23:04:12.178266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.344 [2024-07-26 23:04:12.178511] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.344 [2024-07-26 23:04:12.178537] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.344 [2024-07-26 23:04:12.178553] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.344 [2024-07-26 23:04:12.182143] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.344 [2024-07-26 23:04:12.191436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.344 [2024-07-26 23:04:12.191888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.191919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.191938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.344 [2024-07-26 23:04:12.192188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.344 [2024-07-26 23:04:12.192433] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.344 [2024-07-26 23:04:12.192459] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.344 [2024-07-26 23:04:12.192475] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.344 [2024-07-26 23:04:12.196051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.344 [2024-07-26 23:04:12.205363] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.344 [2024-07-26 23:04:12.205818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.205850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.205868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.344 [2024-07-26 23:04:12.206118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.344 [2024-07-26 23:04:12.206363] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.344 [2024-07-26 23:04:12.206387] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.344 [2024-07-26 23:04:12.206404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.344 [2024-07-26 23:04:12.209981] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.344 [2024-07-26 23:04:12.219277] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.344 [2024-07-26 23:04:12.219731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.219763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.219782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.344 [2024-07-26 23:04:12.220021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.344 [2024-07-26 23:04:12.220276] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.344 [2024-07-26 23:04:12.220302] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.344 [2024-07-26 23:04:12.220319] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.344 [2024-07-26 23:04:12.223892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.344 [2024-07-26 23:04:12.233181] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.344 [2024-07-26 23:04:12.233629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.233661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.233679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.344 [2024-07-26 23:04:12.233918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.344 [2024-07-26 23:04:12.234174] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.344 [2024-07-26 23:04:12.234199] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.344 [2024-07-26 23:04:12.234216] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.344 [2024-07-26 23:04:12.237817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.344 [2024-07-26 23:04:12.247116] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.344 [2024-07-26 23:04:12.247581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.247609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.247630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.344 [2024-07-26 23:04:12.247882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.344 [2024-07-26 23:04:12.248138] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.344 [2024-07-26 23:04:12.248164] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.344 [2024-07-26 23:04:12.248181] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.344 [2024-07-26 23:04:12.251759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.344 [2024-07-26 23:04:12.261053] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.344 [2024-07-26 23:04:12.261521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.261553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.261571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.344 [2024-07-26 23:04:12.261810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.344 [2024-07-26 23:04:12.262054] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.344 [2024-07-26 23:04:12.262088] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.344 [2024-07-26 23:04:12.262105] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.344 [2024-07-26 23:04:12.265679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.344 [2024-07-26 23:04:12.274967] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.344 [2024-07-26 23:04:12.275405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.275438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.275457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.344 [2024-07-26 23:04:12.275697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.344 [2024-07-26 23:04:12.275941] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.344 [2024-07-26 23:04:12.275966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.344 [2024-07-26 23:04:12.275982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.344 [2024-07-26 23:04:12.279570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.344 [2024-07-26 23:04:12.288859] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.344 [2024-07-26 23:04:12.289317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.344 [2024-07-26 23:04:12.289350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.344 [2024-07-26 23:04:12.289368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.345 [2024-07-26 23:04:12.289607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.345 [2024-07-26 23:04:12.289852] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.345 [2024-07-26 23:04:12.289881] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.345 [2024-07-26 23:04:12.289899] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.345 [2024-07-26 23:04:12.293483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.345 [2024-07-26 23:04:12.302769] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.345 [2024-07-26 23:04:12.303224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.345 [2024-07-26 23:04:12.303253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.345 [2024-07-26 23:04:12.303269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.345 [2024-07-26 23:04:12.303521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.345 [2024-07-26 23:04:12.303767] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.345 [2024-07-26 23:04:12.303792] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.345 [2024-07-26 23:04:12.303808] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.345 [2024-07-26 23:04:12.307392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.345 [2024-07-26 23:04:12.316680] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.345 [2024-07-26 23:04:12.317135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.345 [2024-07-26 23:04:12.317164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.345 [2024-07-26 23:04:12.317181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.345 [2024-07-26 23:04:12.317442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.345 [2024-07-26 23:04:12.317686] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.345 [2024-07-26 23:04:12.317712] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.345 [2024-07-26 23:04:12.317728] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.345 [2024-07-26 23:04:12.321312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.345 [2024-07-26 23:04:12.330598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.345 [2024-07-26 23:04:12.331023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.345 [2024-07-26 23:04:12.331055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.345 [2024-07-26 23:04:12.331084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.345 [2024-07-26 23:04:12.331324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.345 [2024-07-26 23:04:12.331569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.345 [2024-07-26 23:04:12.331593] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.345 [2024-07-26 23:04:12.331610] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.345 [2024-07-26 23:04:12.335192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.345 [2024-07-26 23:04:12.344483] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.345 [2024-07-26 23:04:12.344948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.345 [2024-07-26 23:04:12.344980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.345 [2024-07-26 23:04:12.344998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.345 [2024-07-26 23:04:12.345248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.345 [2024-07-26 23:04:12.345493] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.345 [2024-07-26 23:04:12.345518] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.345 [2024-07-26 23:04:12.345534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.345 [2024-07-26 23:04:12.349113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.345 [2024-07-26 23:04:12.358405] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.345 [2024-07-26 23:04:12.358834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.345 [2024-07-26 23:04:12.358866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.345 [2024-07-26 23:04:12.358884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.345 [2024-07-26 23:04:12.359136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.345 [2024-07-26 23:04:12.359381] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.345 [2024-07-26 23:04:12.359406] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.345 [2024-07-26 23:04:12.359423] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.345 [2024-07-26 23:04:12.362994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.345 [2024-07-26 23:04:12.372291] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.345 [2024-07-26 23:04:12.372737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.345 [2024-07-26 23:04:12.372769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.345 [2024-07-26 23:04:12.372787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.345 [2024-07-26 23:04:12.373027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.345 [2024-07-26 23:04:12.373281] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.345 [2024-07-26 23:04:12.373307] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.345 [2024-07-26 23:04:12.373323] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.345 [2024-07-26 23:04:12.376901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.345 [2024-07-26 23:04:12.386200] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.345 [2024-07-26 23:04:12.386653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.386680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.346 [2024-07-26 23:04:12.386696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.346 [2024-07-26 23:04:12.386950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.346 [2024-07-26 23:04:12.387206] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.346 [2024-07-26 23:04:12.387231] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.346 [2024-07-26 23:04:12.387248] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.346 [2024-07-26 23:04:12.390820] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.346 [2024-07-26 23:04:12.400118] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.346 [2024-07-26 23:04:12.400561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.400593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.346 [2024-07-26 23:04:12.400612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.346 [2024-07-26 23:04:12.400851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.346 [2024-07-26 23:04:12.401107] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.346 [2024-07-26 23:04:12.401132] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.346 [2024-07-26 23:04:12.401149] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.346 [2024-07-26 23:04:12.404722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.346 [2024-07-26 23:04:12.414022] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.346 [2024-07-26 23:04:12.414457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.414490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.346 [2024-07-26 23:04:12.414509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.346 [2024-07-26 23:04:12.414749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.346 [2024-07-26 23:04:12.414994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.346 [2024-07-26 23:04:12.415018] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.346 [2024-07-26 23:04:12.415034] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.346 [2024-07-26 23:04:12.418627] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.346 [2024-07-26 23:04:12.427929] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.346 [2024-07-26 23:04:12.428367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.428399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.346 [2024-07-26 23:04:12.428418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.346 [2024-07-26 23:04:12.428659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.346 [2024-07-26 23:04:12.428903] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.346 [2024-07-26 23:04:12.428928] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.346 [2024-07-26 23:04:12.428949] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.346 [2024-07-26 23:04:12.432538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.346 [2024-07-26 23:04:12.441835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.346 [2024-07-26 23:04:12.442257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.442286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.346 [2024-07-26 23:04:12.442302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.346 [2024-07-26 23:04:12.442557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.346 [2024-07-26 23:04:12.442801] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.346 [2024-07-26 23:04:12.442826] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.346 [2024-07-26 23:04:12.442843] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.346 [2024-07-26 23:04:12.446428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.346 [2024-07-26 23:04:12.455713] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.346 [2024-07-26 23:04:12.456153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.456181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.346 [2024-07-26 23:04:12.456206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.346 [2024-07-26 23:04:12.456448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.346 [2024-07-26 23:04:12.456693] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.346 [2024-07-26 23:04:12.456717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.346 [2024-07-26 23:04:12.456734] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.346 [2024-07-26 23:04:12.460317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.346 [2024-07-26 23:04:12.469630] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.346 [2024-07-26 23:04:12.470094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.470126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.346 [2024-07-26 23:04:12.470145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.346 [2024-07-26 23:04:12.470385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.346 [2024-07-26 23:04:12.470629] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.346 [2024-07-26 23:04:12.470654] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.346 [2024-07-26 23:04:12.470671] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.346 [2024-07-26 23:04:12.474251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.346 [2024-07-26 23:04:12.483566] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.346 [2024-07-26 23:04:12.484030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.484078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.346 [2024-07-26 23:04:12.484100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.346 [2024-07-26 23:04:12.484343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.346 [2024-07-26 23:04:12.484590] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.346 [2024-07-26 23:04:12.484615] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.346 [2024-07-26 23:04:12.484631] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.346 [2024-07-26 23:04:12.488212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.346 [2024-07-26 23:04:12.497505] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.346 [2024-07-26 23:04:12.497954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.497985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.346 [2024-07-26 23:04:12.498003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.346 [2024-07-26 23:04:12.498255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.346 [2024-07-26 23:04:12.498500] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.346 [2024-07-26 23:04:12.498525] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.346 [2024-07-26 23:04:12.498543] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.346 [2024-07-26 23:04:12.502124] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.346 [2024-07-26 23:04:12.511429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.346 [2024-07-26 23:04:12.511883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.346 [2024-07-26 23:04:12.511915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.511945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.512195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.512440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.512465] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.512483] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.347 [2024-07-26 23:04:12.516064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.347 [2024-07-26 23:04:12.525355] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.347 [2024-07-26 23:04:12.525967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.347 [2024-07-26 23:04:12.526028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.526056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.526304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.526556] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.526581] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.526598] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.347 [2024-07-26 23:04:12.530178] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.347 [2024-07-26 23:04:12.539255] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.347 [2024-07-26 23:04:12.539690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.347 [2024-07-26 23:04:12.539717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.539743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.539970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.540190] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.540211] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.540225] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.347 [2024-07-26 23:04:12.543253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.347 [2024-07-26 23:04:12.553215] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.347 [2024-07-26 23:04:12.553639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.347 [2024-07-26 23:04:12.553671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.553700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.553939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.554194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.554219] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.554236] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.347 [2024-07-26 23:04:12.557809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.347 [2024-07-26 23:04:12.567038] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.347 [2024-07-26 23:04:12.567481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.347 [2024-07-26 23:04:12.567513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.567532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.567775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.568019] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.568055] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.568084] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.347 [2024-07-26 23:04:12.571636] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.347 [2024-07-26 23:04:12.581109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.347 [2024-07-26 23:04:12.581570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.347 [2024-07-26 23:04:12.581601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.581619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.581859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.582114] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.582139] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.582157] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.347 [2024-07-26 23:04:12.585731] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.347 [2024-07-26 23:04:12.595022] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.347 [2024-07-26 23:04:12.595480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.347 [2024-07-26 23:04:12.595512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.595531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.595769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.596014] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.596039] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.596055] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.347 [2024-07-26 23:04:12.599645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.347 [2024-07-26 23:04:12.608936] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.347 [2024-07-26 23:04:12.609365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.347 [2024-07-26 23:04:12.609397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.609416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.609655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.609899] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.609924] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.609940] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.347 [2024-07-26 23:04:12.613523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.347 [2024-07-26 23:04:12.622821] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.347 [2024-07-26 23:04:12.623259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.347 [2024-07-26 23:04:12.623291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.623314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.623554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.623798] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.623823] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.623840] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.347 [2024-07-26 23:04:12.627431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.347 [2024-07-26 23:04:12.636721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.347 [2024-07-26 23:04:12.637181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.347 [2024-07-26 23:04:12.637213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.347 [2024-07-26 23:04:12.637231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.347 [2024-07-26 23:04:12.637471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.347 [2024-07-26 23:04:12.637715] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.347 [2024-07-26 23:04:12.637740] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.347 [2024-07-26 23:04:12.637757] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.641341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.650625] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.348 [2024-07-26 23:04:12.651073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.348 [2024-07-26 23:04:12.651113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.348 [2024-07-26 23:04:12.651132] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.348 [2024-07-26 23:04:12.651371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.348 [2024-07-26 23:04:12.651614] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.348 [2024-07-26 23:04:12.651639] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.348 [2024-07-26 23:04:12.651655] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.655240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.664529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.348 [2024-07-26 23:04:12.664985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.348 [2024-07-26 23:04:12.665012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.348 [2024-07-26 23:04:12.665029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.348 [2024-07-26 23:04:12.665301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.348 [2024-07-26 23:04:12.665546] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.348 [2024-07-26 23:04:12.665588] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.348 [2024-07-26 23:04:12.665605] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.669194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.678482] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.348 [2024-07-26 23:04:12.678947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.348 [2024-07-26 23:04:12.678979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.348 [2024-07-26 23:04:12.678998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.348 [2024-07-26 23:04:12.679253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.348 [2024-07-26 23:04:12.679498] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.348 [2024-07-26 23:04:12.679523] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.348 [2024-07-26 23:04:12.679539] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.683122] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.692413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.348 [2024-07-26 23:04:12.692862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.348 [2024-07-26 23:04:12.692893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.348 [2024-07-26 23:04:12.692911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.348 [2024-07-26 23:04:12.693161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.348 [2024-07-26 23:04:12.693405] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.348 [2024-07-26 23:04:12.693430] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.348 [2024-07-26 23:04:12.693447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.697022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.706320] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.348 [2024-07-26 23:04:12.706740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.348 [2024-07-26 23:04:12.706771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.348 [2024-07-26 23:04:12.706792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.348 [2024-07-26 23:04:12.707030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.348 [2024-07-26 23:04:12.707284] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.348 [2024-07-26 23:04:12.707309] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.348 [2024-07-26 23:04:12.707326] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.710902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.720207] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.348 [2024-07-26 23:04:12.720655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.348 [2024-07-26 23:04:12.720682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.348 [2024-07-26 23:04:12.720699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.348 [2024-07-26 23:04:12.720939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.348 [2024-07-26 23:04:12.721193] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.348 [2024-07-26 23:04:12.721215] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.348 [2024-07-26 23:04:12.721230] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.724802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.734109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.348 [2024-07-26 23:04:12.734549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.348 [2024-07-26 23:04:12.734576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.348 [2024-07-26 23:04:12.734592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.348 [2024-07-26 23:04:12.734826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.348 [2024-07-26 23:04:12.735082] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.348 [2024-07-26 23:04:12.735108] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.348 [2024-07-26 23:04:12.735124] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.738699] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.747998] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.348 [2024-07-26 23:04:12.748424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.348 [2024-07-26 23:04:12.748457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.348 [2024-07-26 23:04:12.748477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.348 [2024-07-26 23:04:12.748717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.348 [2024-07-26 23:04:12.748962] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.348 [2024-07-26 23:04:12.748988] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.348 [2024-07-26 23:04:12.749005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.752599] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.761898] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.348 [2024-07-26 23:04:12.762349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.348 [2024-07-26 23:04:12.762382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.348 [2024-07-26 23:04:12.762401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.348 [2024-07-26 23:04:12.762647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.348 [2024-07-26 23:04:12.762892] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.348 [2024-07-26 23:04:12.762918] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.348 [2024-07-26 23:04:12.762935] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.348 [2024-07-26 23:04:12.766522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.348 [2024-07-26 23:04:12.775810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.349 [2024-07-26 23:04:12.776275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.349 [2024-07-26 23:04:12.776305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.349 [2024-07-26 23:04:12.776323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.349 [2024-07-26 23:04:12.776591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.349 [2024-07-26 23:04:12.776837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.349 [2024-07-26 23:04:12.776862] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.349 [2024-07-26 23:04:12.776879] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.349 [2024-07-26 23:04:12.780473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.349 [2024-07-26 23:04:12.789779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.349 [2024-07-26 23:04:12.790238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.349 [2024-07-26 23:04:12.790268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.349 [2024-07-26 23:04:12.790285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.349 [2024-07-26 23:04:12.790531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.349 [2024-07-26 23:04:12.790777] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.349 [2024-07-26 23:04:12.790803] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.349 [2024-07-26 23:04:12.790820] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.349 [2024-07-26 23:04:12.794411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.349 [2024-07-26 23:04:12.803713] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.349 [2024-07-26 23:04:12.804177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.349 [2024-07-26 23:04:12.804206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.349 [2024-07-26 23:04:12.804224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.349 [2024-07-26 23:04:12.804476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.349 [2024-07-26 23:04:12.804722] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.349 [2024-07-26 23:04:12.804748] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.349 [2024-07-26 23:04:12.804770] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.349 [2024-07-26 23:04:12.808358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.349 [2024-07-26 23:04:12.817659] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.349 [2024-07-26 23:04:12.818112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.349 [2024-07-26 23:04:12.818145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.349 [2024-07-26 23:04:12.818165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.349 [2024-07-26 23:04:12.818405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.349 [2024-07-26 23:04:12.818651] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.349 [2024-07-26 23:04:12.818677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.349 [2024-07-26 23:04:12.818695] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.349 [2024-07-26 23:04:12.822284] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.349 [2024-07-26 23:04:12.831578] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.349 [2024-07-26 23:04:12.832038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.349 [2024-07-26 23:04:12.832074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.349 [2024-07-26 23:04:12.832093] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.349 [2024-07-26 23:04:12.832345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.349 [2024-07-26 23:04:12.832592] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.349 [2024-07-26 23:04:12.832618] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.349 [2024-07-26 23:04:12.832634] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.349 [2024-07-26 23:04:12.836220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.608 [2024-07-26 23:04:12.845519] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.609 [2024-07-26 23:04:12.845943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.609 [2024-07-26 23:04:12.845976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.609 [2024-07-26 23:04:12.845996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.609 [2024-07-26 23:04:12.846249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.609 [2024-07-26 23:04:12.846494] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.609 [2024-07-26 23:04:12.846520] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.609 [2024-07-26 23:04:12.846537] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.609 [2024-07-26 23:04:12.850120] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.609 [2024-07-26 23:04:12.859427] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.609 [2024-07-26 23:04:12.859888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.609 [2024-07-26 23:04:12.859917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.609 [2024-07-26 23:04:12.859933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.609 [2024-07-26 23:04:12.860197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.609 [2024-07-26 23:04:12.860443] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.609 [2024-07-26 23:04:12.860469] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.609 [2024-07-26 23:04:12.860486] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.609 [2024-07-26 23:04:12.864074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.609 [2024-07-26 23:04:12.873373] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.609 [2024-07-26 23:04:12.873819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.609 [2024-07-26 23:04:12.873851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.609 [2024-07-26 23:04:12.873869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.609 [2024-07-26 23:04:12.874121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.609 [2024-07-26 23:04:12.874365] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.609 [2024-07-26 23:04:12.874392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.609 [2024-07-26 23:04:12.874408] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.609 [2024-07-26 23:04:12.878004] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.609 [2024-07-26 23:04:12.887311] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.609 [2024-07-26 23:04:12.887779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.609 [2024-07-26 23:04:12.887812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.609 [2024-07-26 23:04:12.887830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.609 [2024-07-26 23:04:12.888082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.609 [2024-07-26 23:04:12.888327] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.609 [2024-07-26 23:04:12.888353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.609 [2024-07-26 23:04:12.888370] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.609 [2024-07-26 23:04:12.891948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.609 [2024-07-26 23:04:12.901252] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.609 [2024-07-26 23:04:12.901701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.609 [2024-07-26 23:04:12.901733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.609 [2024-07-26 23:04:12.901752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.609 [2024-07-26 23:04:12.901996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.609 [2024-07-26 23:04:12.902254] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.609 [2024-07-26 23:04:12.902281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.609 [2024-07-26 23:04:12.902298] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.609 [2024-07-26 23:04:12.905880] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.609 [2024-07-26 23:04:12.915203] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.609 [2024-07-26 23:04:12.915650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.609 [2024-07-26 23:04:12.915683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:20.609 [2024-07-26 23:04:12.915701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:20.609 [2024-07-26 23:04:12.915941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:20.609 [2024-07-26 23:04:12.916198] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.609 [2024-07-26 23:04:12.916224] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.609 [2024-07-26 23:04:12.916242] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.609 [2024-07-26 23:04:12.919818] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.609 [2024-07-26 23:04:12.929125] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.609 [2024-07-26 23:04:12.929574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-26 23:04:12.929606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.609 [2024-07-26 23:04:12.929625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.609 [2024-07-26 23:04:12.929864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.609 [2024-07-26 23:04:12.930119] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.609 [2024-07-26 23:04:12.930146] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.609 [2024-07-26 23:04:12.930164] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.609 [2024-07-26 23:04:12.933746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.609 [2024-07-26 23:04:12.943049] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.609 [2024-07-26 23:04:12.943510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-26 23:04:12.943538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.609 [2024-07-26 23:04:12.943554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.609 [2024-07-26 23:04:12.943798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.609 [2024-07-26 23:04:12.944047] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.609 [2024-07-26 23:04:12.944086] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.609 [2024-07-26 23:04:12.944109] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.609 [2024-07-26 23:04:12.947692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.609 [2024-07-26 23:04:12.956991] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.609 [2024-07-26 23:04:12.957432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-26 23:04:12.957464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.609 [2024-07-26 23:04:12.957483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.609 [2024-07-26 23:04:12.957722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.609 [2024-07-26 23:04:12.957966] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.609 [2024-07-26 23:04:12.957991] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.609 [2024-07-26 23:04:12.958007] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.609 [2024-07-26 23:04:12.961598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.609 [2024-07-26 23:04:12.970901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.609 [2024-07-26 23:04:12.971336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-26 23:04:12.971369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.609 [2024-07-26 23:04:12.971388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.609 [2024-07-26 23:04:12.971628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.609 [2024-07-26 23:04:12.971874] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:12.971900] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.610 [2024-07-26 23:04:12.971916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.610 [2024-07-26 23:04:12.975504] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.610 [2024-07-26 23:04:12.984807] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.610 [2024-07-26 23:04:12.985272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-26 23:04:12.985305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.610 [2024-07-26 23:04:12.985325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.610 [2024-07-26 23:04:12.985565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.610 [2024-07-26 23:04:12.985809] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:12.985834] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.610 [2024-07-26 23:04:12.985851] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.610 [2024-07-26 23:04:12.989445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.610 [2024-07-26 23:04:12.998758] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.610 [2024-07-26 23:04:12.999185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-26 23:04:12.999223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.610 [2024-07-26 23:04:12.999243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.610 [2024-07-26 23:04:12.999484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.610 [2024-07-26 23:04:12.999728] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:12.999755] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.610 [2024-07-26 23:04:12.999771] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.610 [2024-07-26 23:04:13.003361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.610 [2024-07-26 23:04:13.012659] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.610 [2024-07-26 23:04:13.013093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-26 23:04:13.013121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.610 [2024-07-26 23:04:13.013137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.610 [2024-07-26 23:04:13.013373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.610 [2024-07-26 23:04:13.013618] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:13.013644] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.610 [2024-07-26 23:04:13.013662] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.610 [2024-07-26 23:04:13.017253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.610 [2024-07-26 23:04:13.026548] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.610 [2024-07-26 23:04:13.026999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-26 23:04:13.027032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.610 [2024-07-26 23:04:13.027051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.610 [2024-07-26 23:04:13.027300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.610 [2024-07-26 23:04:13.027545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:13.027571] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.610 [2024-07-26 23:04:13.027588] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.610 [2024-07-26 23:04:13.031173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.610 [2024-07-26 23:04:13.040468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.610 [2024-07-26 23:04:13.040918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-26 23:04:13.040950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.610 [2024-07-26 23:04:13.040968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.610 [2024-07-26 23:04:13.041218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.610 [2024-07-26 23:04:13.041469] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:13.041496] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.610 [2024-07-26 23:04:13.041513] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.610 [2024-07-26 23:04:13.045097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.610 [2024-07-26 23:04:13.054389] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.610 [2024-07-26 23:04:13.054812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-26 23:04:13.054845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.610 [2024-07-26 23:04:13.054863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.610 [2024-07-26 23:04:13.055116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.610 [2024-07-26 23:04:13.055361] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:13.055387] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.610 [2024-07-26 23:04:13.055404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.610 [2024-07-26 23:04:13.058981] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.610 [2024-07-26 23:04:13.068287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.610 [2024-07-26 23:04:13.068714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-26 23:04:13.068746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.610 [2024-07-26 23:04:13.068764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.610 [2024-07-26 23:04:13.069003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.610 [2024-07-26 23:04:13.069260] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:13.069287] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.610 [2024-07-26 23:04:13.069304] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.610 [2024-07-26 23:04:13.072881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.610 [2024-07-26 23:04:13.082199] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.610 [2024-07-26 23:04:13.082631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-26 23:04:13.082665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.610 [2024-07-26 23:04:13.082685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.610 [2024-07-26 23:04:13.082926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.610 [2024-07-26 23:04:13.083184] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:13.083211] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.610 [2024-07-26 23:04:13.083228] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.610 [2024-07-26 23:04:13.086815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
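Within each cycle, the connect() failure is immediately followed by "Failed to flush tqpair=... (9): Bad file descriptor". The "(9)" is an errno as well: once the connect attempt fails, the qpair's socket descriptor has been closed, so the subsequent flush from nvme_tcp_qpair_process_completions operates on an invalid fd and gets EBADF (errno 9 on Linux). A tiny demo of that mapping, again a hypothetical standalone program rather than SPDK code:

```c
/*
 * Tiny demo, not SPDK code: the "(9): Bad file descriptor" flush error.
 * Operating on a socket fd that has already been closed fails with
 * EBADF, which is errno 9 on Linux -- the number shown in the log.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                      /* fd is now invalid */

    char byte = 0;
    if (send(fd, &byte, sizeof(byte), 0) < 0) {
        /* Prints: Failed to flush (9): Bad file descriptor */
        printf("Failed to flush (%d): %s\n", errno, strerror(errno));
    }
    return 0;
}
```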
00:34:20.610 [2024-07-26 23:04:13.096135] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.610 [2024-07-26 23:04:13.096584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-26 23:04:13.096616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.610 [2024-07-26 23:04:13.096635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.610 [2024-07-26 23:04:13.096874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.610 [2024-07-26 23:04:13.097131] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.610 [2024-07-26 23:04:13.097157] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.611 [2024-07-26 23:04:13.097174] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.611 [2024-07-26 23:04:13.100756] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.611 [2024-07-26 23:04:13.110097] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.872 [2024-07-26 23:04:13.110519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.872 [2024-07-26 23:04:13.110551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.872 [2024-07-26 23:04:13.110570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.872 [2024-07-26 23:04:13.110811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.872 [2024-07-26 23:04:13.111056] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.872 [2024-07-26 23:04:13.111094] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.872 [2024-07-26 23:04:13.111122] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.872 [2024-07-26 23:04:13.114703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.872 [2024-07-26 23:04:13.124016] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.872 [2024-07-26 23:04:13.124500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.872 [2024-07-26 23:04:13.124528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.872 [2024-07-26 23:04:13.124545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.872 [2024-07-26 23:04:13.124795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.872 [2024-07-26 23:04:13.125032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.872 [2024-07-26 23:04:13.125053] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.872 [2024-07-26 23:04:13.125090] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.872 [2024-07-26 23:04:13.128690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.872 [2024-07-26 23:04:13.137801] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.872 [2024-07-26 23:04:13.138254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.872 [2024-07-26 23:04:13.138283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.872 [2024-07-26 23:04:13.138306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.872 [2024-07-26 23:04:13.138550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.872 [2024-07-26 23:04:13.138809] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.872 [2024-07-26 23:04:13.138835] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.872 [2024-07-26 23:04:13.138852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.872 [2024-07-26 23:04:13.142470] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.872 [2024-07-26 23:04:13.151742] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.872 [2024-07-26 23:04:13.152204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.872 [2024-07-26 23:04:13.152235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.872 [2024-07-26 23:04:13.152253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.872 [2024-07-26 23:04:13.152495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.872 [2024-07-26 23:04:13.152738] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.872 [2024-07-26 23:04:13.152765] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.872 [2024-07-26 23:04:13.152782] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.872 [2024-07-26 23:04:13.156407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.872 [2024-07-26 23:04:13.165905] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.872 [2024-07-26 23:04:13.166428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.872 [2024-07-26 23:04:13.166462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.872 [2024-07-26 23:04:13.166481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.872 [2024-07-26 23:04:13.166750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.872 [2024-07-26 23:04:13.166951] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.872 [2024-07-26 23:04:13.166972] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.872 [2024-07-26 23:04:13.166986] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.872 [2024-07-26 23:04:13.170587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.872 [2024-07-26 23:04:13.179631] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.872 [2024-07-26 23:04:13.180101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.872 [2024-07-26 23:04:13.180132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.872 [2024-07-26 23:04:13.180150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.872 [2024-07-26 23:04:13.180367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.872 [2024-07-26 23:04:13.180634] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.872 [2024-07-26 23:04:13.180666] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.872 [2024-07-26 23:04:13.180684] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.872 [2024-07-26 23:04:13.184287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.872 [2024-07-26 23:04:13.193716] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.872 [2024-07-26 23:04:13.194195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.872 [2024-07-26 23:04:13.194225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.872 [2024-07-26 23:04:13.194242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.872 [2024-07-26 23:04:13.194488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.872 [2024-07-26 23:04:13.194741] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.872 [2024-07-26 23:04:13.194768] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.872 [2024-07-26 23:04:13.194785] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.872 [2024-07-26 23:04:13.198410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.872 [2024-07-26 23:04:13.207817] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.872 [2024-07-26 23:04:13.208206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.872 [2024-07-26 23:04:13.208236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.872 [2024-07-26 23:04:13.208255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.872 [2024-07-26 23:04:13.208527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.872 [2024-07-26 23:04:13.208757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.872 [2024-07-26 23:04:13.208779] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.872 [2024-07-26 23:04:13.208794] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.872 [2024-07-26 23:04:13.212320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.872 [2024-07-26 23:04:13.221979] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.872 [2024-07-26 23:04:13.222445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.872 [2024-07-26 23:04:13.222498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.872 [2024-07-26 23:04:13.222517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.872 [2024-07-26 23:04:13.222758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.872 [2024-07-26 23:04:13.223002] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.223028] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.223046] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.226691] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.873 [2024-07-26 23:04:13.235912] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.236333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.236377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.236394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.236643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.236887] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.236913] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.236930] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.240570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.873 [2024-07-26 23:04:13.249989] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.250411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.250465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.250501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.250740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.250984] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.251010] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.251026] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.254673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.873 [2024-07-26 23:04:13.263865] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.264377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.264430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.264450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.264690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.264933] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.264960] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.264977] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.268505] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.873 [2024-07-26 23:04:13.277676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.278105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.278138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.278158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.278404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.278651] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.278677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.278694] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.282294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
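The cycles repeat with near-perfect regularity: a new "resetting controller" notice appears roughly every 14 ms, which suggests a fixed reconnect/poll period rather than exponential backoff. SPDK schedules these retries from its poller framework; the sketch below is only a generic bounded retry loop under that assumption (the 14 ms period and 50-attempt cap are invented for illustration, not SPDK parameters):

```c
/*
 * Hypothetical bounded retry loop, not SPDK's implementation: models
 * the cadence visible in the log, where a reconnect attempt fails and
 * is retried on a fixed period. The 14 ms delay and 50-attempt cap are
 * invented for illustration; SPDK drives the real retries from its
 * poller framework.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Stand-in for one disconnect -> connect -> init sequence from the log;
 * always fails, simulating a target that is still down (ECONNREFUSED). */
static bool try_reconnect(void)
{
    return false;
}

int main(void)
{
    const struct timespec delay = { .tv_sec = 0, .tv_nsec = 14000000L };
    const int max_attempts = 50;

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_reconnect()) {
            printf("reconnected after %d attempt(s)\n", attempt);
            return 0;
        }
        printf("attempt %d: Resetting controller failed.\n", attempt);
        nanosleep(&delay, NULL);
    }

    printf("giving up after %d attempts\n", max_attempts);
    return 1;
}
```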
00:34:20.873 [2024-07-26 23:04:13.291610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.292040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.292080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.292101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.292341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.292587] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.292613] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.292630] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.296227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.873 [2024-07-26 23:04:13.305561] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.306013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.306046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.306078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.306326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.306580] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.306606] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.306623] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.310222] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.873 [2024-07-26 23:04:13.319526] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.319953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.319986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.320005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.320266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.320512] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.320538] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.320560] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.324158] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.873 [2024-07-26 23:04:13.333468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.333988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.334038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.334069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.334320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.334564] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.334589] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.334605] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.338198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.873 [2024-07-26 23:04:13.347500] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.347994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.348022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.348037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.348304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.348550] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.348576] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.348593] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.352191] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.873 [2024-07-26 23:04:13.361508] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-26 23:04:13.362004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.873 [2024-07-26 23:04:13.362054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:20.873 [2024-07-26 23:04:13.362083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:20.873 [2024-07-26 23:04:13.362323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:20.873 [2024-07-26 23:04:13.362566] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.873 [2024-07-26 23:04:13.362593] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.873 [2024-07-26 23:04:13.362609] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.873 [2024-07-26 23:04:13.366211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.134 [2024-07-26 23:04:13.375551] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.134 [2024-07-26 23:04:13.376144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.134 [2024-07-26 23:04:13.376177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.134 [2024-07-26 23:04:13.376196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.134 [2024-07-26 23:04:13.376436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.134 [2024-07-26 23:04:13.376680] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.134 [2024-07-26 23:04:13.376705] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.134 [2024-07-26 23:04:13.376722] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.134 [2024-07-26 23:04:13.380324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.134 [2024-07-26 23:04:13.389438] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.134 [2024-07-26 23:04:13.389898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.134 [2024-07-26 23:04:13.389931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.134 [2024-07-26 23:04:13.389950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.134 [2024-07-26 23:04:13.390208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.134 [2024-07-26 23:04:13.390453] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.134 [2024-07-26 23:04:13.390480] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.134 [2024-07-26 23:04:13.390497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.394102] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.135 [2024-07-26 23:04:13.403408] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.403955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.135 [2024-07-26 23:04:13.404005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.135 [2024-07-26 23:04:13.404024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.135 [2024-07-26 23:04:13.404281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.135 [2024-07-26 23:04:13.404526] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.135 [2024-07-26 23:04:13.404553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.135 [2024-07-26 23:04:13.404570] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.408176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.135 [2024-07-26 23:04:13.417316] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.417770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.135 [2024-07-26 23:04:13.417803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.135 [2024-07-26 23:04:13.417823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.135 [2024-07-26 23:04:13.418075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.135 [2024-07-26 23:04:13.418327] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.135 [2024-07-26 23:04:13.418354] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.135 [2024-07-26 23:04:13.418372] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.421965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.135 [2024-07-26 23:04:13.431308] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.431756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.135 [2024-07-26 23:04:13.431789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.135 [2024-07-26 23:04:13.431808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.135 [2024-07-26 23:04:13.432048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.135 [2024-07-26 23:04:13.432313] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.135 [2024-07-26 23:04:13.432340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.135 [2024-07-26 23:04:13.432357] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.435948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.135 [2024-07-26 23:04:13.445287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.445750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.135 [2024-07-26 23:04:13.445783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.135 [2024-07-26 23:04:13.445801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.135 [2024-07-26 23:04:13.446041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.135 [2024-07-26 23:04:13.446301] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.135 [2024-07-26 23:04:13.446328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.135 [2024-07-26 23:04:13.446346] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.449925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.135 [2024-07-26 23:04:13.459268] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.459721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.135 [2024-07-26 23:04:13.459754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.135 [2024-07-26 23:04:13.459773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.135 [2024-07-26 23:04:13.460013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.135 [2024-07-26 23:04:13.460275] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.135 [2024-07-26 23:04:13.460303] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.135 [2024-07-26 23:04:13.460321] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.463906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.135 [2024-07-26 23:04:13.473229] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.473679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.135 [2024-07-26 23:04:13.473712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.135 [2024-07-26 23:04:13.473731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.135 [2024-07-26 23:04:13.473970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.135 [2024-07-26 23:04:13.474235] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.135 [2024-07-26 23:04:13.474264] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.135 [2024-07-26 23:04:13.474282] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.477862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.135 [2024-07-26 23:04:13.487185] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.487634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.135 [2024-07-26 23:04:13.487667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.135 [2024-07-26 23:04:13.487686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.135 [2024-07-26 23:04:13.487925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.135 [2024-07-26 23:04:13.488190] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.135 [2024-07-26 23:04:13.488219] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.135 [2024-07-26 23:04:13.488236] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.491814] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.135 [2024-07-26 23:04:13.501137] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.501561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.135 [2024-07-26 23:04:13.501594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.135 [2024-07-26 23:04:13.501613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.135 [2024-07-26 23:04:13.501853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.135 [2024-07-26 23:04:13.502119] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.135 [2024-07-26 23:04:13.502147] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.135 [2024-07-26 23:04:13.502165] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.505745] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.135 [2024-07-26 23:04:13.515069] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.515575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.135 [2024-07-26 23:04:13.515631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.135 [2024-07-26 23:04:13.515651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.135 [2024-07-26 23:04:13.515891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.135 [2024-07-26 23:04:13.516156] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.135 [2024-07-26 23:04:13.516184] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.135 [2024-07-26 23:04:13.516202] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.135 [2024-07-26 23:04:13.519784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.135 [2024-07-26 23:04:13.529105] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.135 [2024-07-26 23:04:13.529641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.136 [2024-07-26 23:04:13.529694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.136 [2024-07-26 23:04:13.529713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.136 [2024-07-26 23:04:13.529952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.136 [2024-07-26 23:04:13.530215] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.136 [2024-07-26 23:04:13.530244] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.136 [2024-07-26 23:04:13.530261] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.136 [2024-07-26 23:04:13.533842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.136 [2024-07-26 23:04:13.542948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.136 [2024-07-26 23:04:13.543409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.136 [2024-07-26 23:04:13.543442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.136 [2024-07-26 23:04:13.543461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.136 [2024-07-26 23:04:13.543700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.136 [2024-07-26 23:04:13.543945] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.136 [2024-07-26 23:04:13.543970] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.136 [2024-07-26 23:04:13.543987] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.136 [2024-07-26 23:04:13.547587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.136 [2024-07-26 23:04:13.556914] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.136 [2024-07-26 23:04:13.557501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.136 [2024-07-26 23:04:13.557559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.136 [2024-07-26 23:04:13.557578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.136 [2024-07-26 23:04:13.557818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.136 [2024-07-26 23:04:13.558087] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.136 [2024-07-26 23:04:13.558115] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.136 [2024-07-26 23:04:13.558132] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.136 [2024-07-26 23:04:13.561714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.136 [2024-07-26 23:04:13.570852] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.136 [2024-07-26 23:04:13.571334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.136 [2024-07-26 23:04:13.571367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.136 [2024-07-26 23:04:13.571387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.136 [2024-07-26 23:04:13.571627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.136 [2024-07-26 23:04:13.571872] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.136 [2024-07-26 23:04:13.571897] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.136 [2024-07-26 23:04:13.571913] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.136 [2024-07-26 23:04:13.575515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.136 [2024-07-26 23:04:13.584852] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.136 [2024-07-26 23:04:13.585288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.136 [2024-07-26 23:04:13.585322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.136 [2024-07-26 23:04:13.585342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.136 [2024-07-26 23:04:13.585583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.136 [2024-07-26 23:04:13.585828] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.136 [2024-07-26 23:04:13.585854] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.136 [2024-07-26 23:04:13.585871] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.136 [2024-07-26 23:04:13.589471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.136 [2024-07-26 23:04:13.598811] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.136 [2024-07-26 23:04:13.599275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.136 [2024-07-26 23:04:13.599308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.136 [2024-07-26 23:04:13.599328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.136 [2024-07-26 23:04:13.599568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.136 [2024-07-26 23:04:13.599813] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.136 [2024-07-26 23:04:13.599838] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.136 [2024-07-26 23:04:13.599855] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.136 [2024-07-26 23:04:13.603450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.136 [2024-07-26 23:04:13.612772] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.136 [2024-07-26 23:04:13.613265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.136 [2024-07-26 23:04:13.613315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.136 [2024-07-26 23:04:13.613334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.136 [2024-07-26 23:04:13.613573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.136 [2024-07-26 23:04:13.613817] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.136 [2024-07-26 23:04:13.613843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.136 [2024-07-26 23:04:13.613859] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.136 [2024-07-26 23:04:13.617462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.136 [2024-07-26 23:04:13.626805] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.136 [2024-07-26 23:04:13.627280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.136 [2024-07-26 23:04:13.627313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.136 [2024-07-26 23:04:13.627332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.136 [2024-07-26 23:04:13.627572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.136 [2024-07-26 23:04:13.627816] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.136 [2024-07-26 23:04:13.627843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.136 [2024-07-26 23:04:13.627860] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.136 [2024-07-26 23:04:13.631471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.397 [2024-07-26 23:04:13.640830] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.397 [2024-07-26 23:04:13.641251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-07-26 23:04:13.641284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.397 [2024-07-26 23:04:13.641303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.397 [2024-07-26 23:04:13.641544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.397 [2024-07-26 23:04:13.641789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.397 [2024-07-26 23:04:13.641814] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.397 [2024-07-26 23:04:13.641831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.397 [2024-07-26 23:04:13.645451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.397 [2024-07-26 23:04:13.654792] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.397 [2024-07-26 23:04:13.655213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-07-26 23:04:13.655246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.397 [2024-07-26 23:04:13.655271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.397 [2024-07-26 23:04:13.655512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.397 [2024-07-26 23:04:13.655757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.397 [2024-07-26 23:04:13.655782] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.397 [2024-07-26 23:04:13.655799] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.397 [2024-07-26 23:04:13.659404] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.397 [2024-07-26 23:04:13.668746] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.397 [2024-07-26 23:04:13.669217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-07-26 23:04:13.669250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.397 [2024-07-26 23:04:13.669270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.397 [2024-07-26 23:04:13.669510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.397 [2024-07-26 23:04:13.669755] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.397 [2024-07-26 23:04:13.669780] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.397 [2024-07-26 23:04:13.669797] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.397 [2024-07-26 23:04:13.673389] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.397 [2024-07-26 23:04:13.682713] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.397 [2024-07-26 23:04:13.683196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-07-26 23:04:13.683229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.397 [2024-07-26 23:04:13.683247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.397 [2024-07-26 23:04:13.683487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.397 [2024-07-26 23:04:13.683731] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.397 [2024-07-26 23:04:13.683757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.397 [2024-07-26 23:04:13.683774] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.397 [2024-07-26 23:04:13.687368] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.397 [2024-07-26 23:04:13.696709] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.397 [2024-07-26 23:04:13.697148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-07-26 23:04:13.697181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.397 [2024-07-26 23:04:13.697200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.397 [2024-07-26 23:04:13.697440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.397 [2024-07-26 23:04:13.697686] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.397 [2024-07-26 23:04:13.697717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.397 [2024-07-26 23:04:13.697735] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.397 [2024-07-26 23:04:13.701339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.397 [2024-07-26 23:04:13.710707] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.397 [2024-07-26 23:04:13.711164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-07-26 23:04:13.711197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.397 [2024-07-26 23:04:13.711216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.397 [2024-07-26 23:04:13.711456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.397 [2024-07-26 23:04:13.711702] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.397 [2024-07-26 23:04:13.711728] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.397 [2024-07-26 23:04:13.711744] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.715339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.398 [2024-07-26 23:04:13.724685] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.725135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.725168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.725188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.725428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.725673] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.725699] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.725716] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.729309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.398 [2024-07-26 23:04:13.738661] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.739115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.739148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.739167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.739406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.739650] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.739675] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.739692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.743290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.398 [2024-07-26 23:04:13.752625] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.753098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.753130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.753149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.753389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.753634] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.753659] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.753676] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.757274] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.398 [2024-07-26 23:04:13.766599] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.767050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.767089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.767108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.767358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.767602] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.767627] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.767643] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.771239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.398 [2024-07-26 23:04:13.780595] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.781066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.781100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.781126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.781365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.781610] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.781636] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.781654] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.785255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.398 [2024-07-26 23:04:13.794576] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.795010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.795043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.795072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.795329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.795573] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.795600] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.795617] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.799213] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.398 [2024-07-26 23:04:13.808507] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.808959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.808991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.809010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.809268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.809513] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.809540] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.809557] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.813149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.398 [2024-07-26 23:04:13.822459] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.822927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.822961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.822980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.823239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.823485] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.823511] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.823528] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.827118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.398 [2024-07-26 23:04:13.836419] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.836883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.836915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.836934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.837193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.837439] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.837465] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.837487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.398 [2024-07-26 23:04:13.841077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.398 [2024-07-26 23:04:13.850378] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.398 [2024-07-26 23:04:13.850832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-07-26 23:04:13.850865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.398 [2024-07-26 23:04:13.850884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.398 [2024-07-26 23:04:13.851143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.398 [2024-07-26 23:04:13.851390] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.398 [2024-07-26 23:04:13.851416] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.398 [2024-07-26 23:04:13.851433] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.399 [2024-07-26 23:04:13.855009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.399 [2024-07-26 23:04:13.864329] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.399 [2024-07-26 23:04:13.864755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-07-26 23:04:13.864787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.399 [2024-07-26 23:04:13.864806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.399 [2024-07-26 23:04:13.865045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.399 [2024-07-26 23:04:13.865307] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.399 [2024-07-26 23:04:13.865335] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.399 [2024-07-26 23:04:13.865352] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.399 [2024-07-26 23:04:13.868929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.399 [2024-07-26 23:04:13.878242] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.399 [2024-07-26 23:04:13.878691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-07-26 23:04:13.878723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.399 [2024-07-26 23:04:13.878742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.399 [2024-07-26 23:04:13.878982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.399 [2024-07-26 23:04:13.879243] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.399 [2024-07-26 23:04:13.879271] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.399 [2024-07-26 23:04:13.879288] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.399 [2024-07-26 23:04:13.882871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.399 [2024-07-26 23:04:13.892190] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.399 [2024-07-26 23:04:13.892650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-07-26 23:04:13.892684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.399 [2024-07-26 23:04:13.892703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.399 [2024-07-26 23:04:13.892942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.399 [2024-07-26 23:04:13.893204] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.399 [2024-07-26 23:04:13.893233] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.399 [2024-07-26 23:04:13.893251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.399 [2024-07-26 23:04:13.896827] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.659 [2024-07-26 23:04:13.906182] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.659 [2024-07-26 23:04:13.906589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.659 [2024-07-26 23:04:13.906622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.659 [2024-07-26 23:04:13.906642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.659 [2024-07-26 23:04:13.906882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.659 [2024-07-26 23:04:13.907146] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.659 [2024-07-26 23:04:13.907173] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.659 [2024-07-26 23:04:13.907190] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.659 [2024-07-26 23:04:13.910766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.659 [2024-07-26 23:04:13.920105] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.659 [2024-07-26 23:04:13.920537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.659 [2024-07-26 23:04:13.920571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.659 [2024-07-26 23:04:13.920590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.659 [2024-07-26 23:04:13.920830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.659 [2024-07-26 23:04:13.921084] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.659 [2024-07-26 23:04:13.921111] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.659 [2024-07-26 23:04:13.921129] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.659 [2024-07-26 23:04:13.924716] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.659 [2024-07-26 23:04:13.934045] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.659 [2024-07-26 23:04:13.934504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.659 [2024-07-26 23:04:13.934536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.659 [2024-07-26 23:04:13.934556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.659 [2024-07-26 23:04:13.934796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.659 [2024-07-26 23:04:13.935047] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.659 [2024-07-26 23:04:13.935089] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.659 [2024-07-26 23:04:13.935110] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.659 [2024-07-26 23:04:13.938700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.659 [2024-07-26 23:04:13.948019] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.659 [2024-07-26 23:04:13.948482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.659 [2024-07-26 23:04:13.948515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.659 [2024-07-26 23:04:13.948534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.659 [2024-07-26 23:04:13.948774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.659 [2024-07-26 23:04:13.949018] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.659 [2024-07-26 23:04:13.949044] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.659 [2024-07-26 23:04:13.949071] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.659 [2024-07-26 23:04:13.952663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.659 [2024-07-26 23:04:13.961967] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.659 [2024-07-26 23:04:13.962427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.659 [2024-07-26 23:04:13.962459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:13.962479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:13.962718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.660 [2024-07-26 23:04:13.962963] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.660 [2024-07-26 23:04:13.962988] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.660 [2024-07-26 23:04:13.963005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.660 [2024-07-26 23:04:13.966634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.660 [2024-07-26 23:04:13.975951] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.660 [2024-07-26 23:04:13.976417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.660 [2024-07-26 23:04:13.976450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:13.976469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:13.976709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.660 [2024-07-26 23:04:13.976953] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.660 [2024-07-26 23:04:13.976979] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.660 [2024-07-26 23:04:13.976996] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.660 [2024-07-26 23:04:13.980593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.660 [2024-07-26 23:04:13.989901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.660 [2024-07-26 23:04:13.990360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.660 [2024-07-26 23:04:13.990393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:13.990412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:13.990651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.660 [2024-07-26 23:04:13.990895] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.660 [2024-07-26 23:04:13.990921] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.660 [2024-07-26 23:04:13.990938] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.660 [2024-07-26 23:04:13.994534] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.660 [2024-07-26 23:04:14.003844] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.660 [2024-07-26 23:04:14.004279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.660 [2024-07-26 23:04:14.004312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:14.004331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:14.004570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.660 [2024-07-26 23:04:14.004814] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.660 [2024-07-26 23:04:14.004840] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.660 [2024-07-26 23:04:14.004857] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.660 [2024-07-26 23:04:14.008455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.660 [2024-07-26 23:04:14.017760] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.660 [2024-07-26 23:04:14.018225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.660 [2024-07-26 23:04:14.018258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:14.018277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:14.018517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.660 [2024-07-26 23:04:14.018761] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.660 [2024-07-26 23:04:14.018787] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.660 [2024-07-26 23:04:14.018804] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.660 [2024-07-26 23:04:14.022395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.660 [2024-07-26 23:04:14.031732] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.660 [2024-07-26 23:04:14.032203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.660 [2024-07-26 23:04:14.032236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:14.032261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:14.032501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.660 [2024-07-26 23:04:14.032745] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.660 [2024-07-26 23:04:14.032771] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.660 [2024-07-26 23:04:14.032788] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.660 [2024-07-26 23:04:14.036383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.660 [2024-07-26 23:04:14.045696] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.660 [2024-07-26 23:04:14.046101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.660 [2024-07-26 23:04:14.046134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:14.046153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:14.046392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.660 [2024-07-26 23:04:14.046636] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.660 [2024-07-26 23:04:14.046663] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.660 [2024-07-26 23:04:14.046679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.660 [2024-07-26 23:04:14.050275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.660 [2024-07-26 23:04:14.059583] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.660 [2024-07-26 23:04:14.060014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.660 [2024-07-26 23:04:14.060047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:14.060082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:14.060328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.660 [2024-07-26 23:04:14.060574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.660 [2024-07-26 23:04:14.060600] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.660 [2024-07-26 23:04:14.060617] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.660 [2024-07-26 23:04:14.064211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.660 [2024-07-26 23:04:14.073511] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.660 [2024-07-26 23:04:14.073973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.660 [2024-07-26 23:04:14.074006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:14.074026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:14.074282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.660 [2024-07-26 23:04:14.074538] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.660 [2024-07-26 23:04:14.074565] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.660 [2024-07-26 23:04:14.074583] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.660 [2024-07-26 23:04:14.078174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.660 [2024-07-26 23:04:14.087490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.660 [2024-07-26 23:04:14.087919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.660 [2024-07-26 23:04:14.087953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.660 [2024-07-26 23:04:14.087972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.660 [2024-07-26 23:04:14.088231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.661 [2024-07-26 23:04:14.088480] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.661 [2024-07-26 23:04:14.088506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.661 [2024-07-26 23:04:14.088523] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.661 [2024-07-26 23:04:14.092114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.661 [2024-07-26 23:04:14.101410] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.661 [2024-07-26 23:04:14.101844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-26 23:04:14.101877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.661 [2024-07-26 23:04:14.101896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.661 [2024-07-26 23:04:14.102153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.661 [2024-07-26 23:04:14.102399] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.661 [2024-07-26 23:04:14.102425] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.661 [2024-07-26 23:04:14.102442] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.661 [2024-07-26 23:04:14.106020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.661 [2024-07-26 23:04:14.115331] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.661 [2024-07-26 23:04:14.115781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-26 23:04:14.115814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.661 [2024-07-26 23:04:14.115833] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.661 [2024-07-26 23:04:14.116090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.661 [2024-07-26 23:04:14.116338] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.661 [2024-07-26 23:04:14.116365] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.661 [2024-07-26 23:04:14.116382] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.661 [2024-07-26 23:04:14.119961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.661 [2024-07-26 23:04:14.129294] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.661 [2024-07-26 23:04:14.129720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-26 23:04:14.129753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.661 [2024-07-26 23:04:14.129772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.661 [2024-07-26 23:04:14.130011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.661 [2024-07-26 23:04:14.130264] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.661 [2024-07-26 23:04:14.130291] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.661 [2024-07-26 23:04:14.130308] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.661 [2024-07-26 23:04:14.133896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.661 [2024-07-26 23:04:14.143253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.661 [2024-07-26 23:04:14.143725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-26 23:04:14.143758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:21.661 [2024-07-26 23:04:14.143778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:21.661 [2024-07-26 23:04:14.144018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:21.661 [2024-07-26 23:04:14.144272] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.661 [2024-07-26 23:04:14.144299] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.661 [2024-07-26 23:04:14.144315] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.661 [2024-07-26 23:04:14.147904] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.661 [2024-07-26 23:04:14.157262] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.661 [2024-07-26 23:04:14.157719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.661 [2024-07-26 23:04:14.157751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.661 [2024-07-26 23:04:14.157771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.661 [2024-07-26 23:04:14.158010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.661 [2024-07-26 23:04:14.158264] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.661 [2024-07-26 23:04:14.158290] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.661 [2024-07-26 23:04:14.158307] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.161897] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.171234] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.171692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.171725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.171750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.171990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.172253] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.172281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.172298] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.175880] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.185227] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.185690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.185722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.185741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.185980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.186471] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.186498] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.186515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.190120] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.199220] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.199678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.199711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.199730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.199970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.200233] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.200261] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.200279] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.203855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.213182] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.213615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.213647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.213666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.213905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.214168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.214202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.214220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.217799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.227216] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.227683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.227718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.227739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.227979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.228254] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.228281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.228299] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.231896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.241243] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.241678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.241714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.241734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.241975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.242233] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.242261] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.242278] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.245874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.255237] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.255694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.255727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.255746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.255986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.256245] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.256271] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.256289] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.259883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.269263] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.269702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.269734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.269753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.269993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.270253] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.270280] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.270297] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.273890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.283217] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.283654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.283687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.283706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.283946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.284200] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.284226] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.284243] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.287824] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.297141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.297592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.297624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.297643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.297883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.298137] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.298163] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.298180] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.301760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.311077] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.311501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.311533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.311551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.311797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.312043] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.312078] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.312096] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.315679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.324981] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.325454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.325486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.325505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.921 [2024-07-26 23:04:14.325745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.921 [2024-07-26 23:04:14.325990] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.921 [2024-07-26 23:04:14.326016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.921 [2024-07-26 23:04:14.326032] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.921 [2024-07-26 23:04:14.329621] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.921 [2024-07-26 23:04:14.338932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.921 [2024-07-26 23:04:14.339369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-26 23:04:14.339401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.921 [2024-07-26 23:04:14.339420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.922 [2024-07-26 23:04:14.339660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.922 [2024-07-26 23:04:14.339904] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.922 [2024-07-26 23:04:14.339929] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.922 [2024-07-26 23:04:14.339946] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.922 [2024-07-26 23:04:14.343537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.922 [2024-07-26 23:04:14.352842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.922 [2024-07-26 23:04:14.353313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-26 23:04:14.353345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.922 [2024-07-26 23:04:14.353363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.922 [2024-07-26 23:04:14.353603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.922 [2024-07-26 23:04:14.353847] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.922 [2024-07-26 23:04:14.353873] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.922 [2024-07-26 23:04:14.353895] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.922 [2024-07-26 23:04:14.357492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.922 [2024-07-26 23:04:14.366793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.922 [2024-07-26 23:04:14.367251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-26 23:04:14.367283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.922 [2024-07-26 23:04:14.367301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.922 [2024-07-26 23:04:14.367541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.922 [2024-07-26 23:04:14.367785] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.922 [2024-07-26 23:04:14.367811] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.922 [2024-07-26 23:04:14.367827] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.922 [2024-07-26 23:04:14.371420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.922 [2024-07-26 23:04:14.380726] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.922 [2024-07-26 23:04:14.381179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-26 23:04:14.381219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.922 [2024-07-26 23:04:14.381238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.922 [2024-07-26 23:04:14.381477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.922 [2024-07-26 23:04:14.381721] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.922 [2024-07-26 23:04:14.381746] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.922 [2024-07-26 23:04:14.381764] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.922 [2024-07-26 23:04:14.385356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.922 [2024-07-26 23:04:14.394656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.922 [2024-07-26 23:04:14.395107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-26 23:04:14.395149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.922 [2024-07-26 23:04:14.395168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.922 [2024-07-26 23:04:14.395408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.922 [2024-07-26 23:04:14.395651] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.922 [2024-07-26 23:04:14.395677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.922 [2024-07-26 23:04:14.395693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.922 [2024-07-26 23:04:14.399278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.922 [2024-07-26 23:04:14.408578] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.922 [2024-07-26 23:04:14.409009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-26 23:04:14.409048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.922 [2024-07-26 23:04:14.409077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:21.922 [2024-07-26 23:04:14.409319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:21.922 [2024-07-26 23:04:14.409564] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.922 [2024-07-26 23:04:14.409589] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.922 [2024-07-26 23:04:14.409605] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.922 [2024-07-26 23:04:14.413190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.922 [2024-07-26 23:04:14.422489] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.922 [2024-07-26 23:04:14.422951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-26 23:04:14.422984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:21.922 [2024-07-26 23:04:14.423003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.182 [2024-07-26 23:04:14.423252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.182 [2024-07-26 23:04:14.423499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.182 [2024-07-26 23:04:14.423526] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.182 [2024-07-26 23:04:14.423543] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.182 [2024-07-26 23:04:14.427130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.182 [2024-07-26 23:04:14.436424] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.182 [2024-07-26 23:04:14.436872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.182 [2024-07-26 23:04:14.436905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.182 [2024-07-26 23:04:14.436923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.182 [2024-07-26 23:04:14.437172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.182 [2024-07-26 23:04:14.437416] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.182 [2024-07-26 23:04:14.437442] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.182 [2024-07-26 23:04:14.437459] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.182 [2024-07-26 23:04:14.441034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.182 [2024-07-26 23:04:14.450336] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.182 [2024-07-26 23:04:14.450806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.182 [2024-07-26 23:04:14.450839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.182 [2024-07-26 23:04:14.450858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.182 [2024-07-26 23:04:14.451110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.182 [2024-07-26 23:04:14.451360] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.182 [2024-07-26 23:04:14.451386] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.182 [2024-07-26 23:04:14.451403] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.182 [2024-07-26 23:04:14.454977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.182 [2024-07-26 23:04:14.464276] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.182 [2024-07-26 23:04:14.464700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.182 [2024-07-26 23:04:14.464732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.182 [2024-07-26 23:04:14.464751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.182 [2024-07-26 23:04:14.464990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.182 [2024-07-26 23:04:14.465244] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.182 [2024-07-26 23:04:14.465271] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.182 [2024-07-26 23:04:14.465288] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.182 [2024-07-26 23:04:14.468865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.182 [2024-07-26 23:04:14.478164] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.182 [2024-07-26 23:04:14.478624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.182 [2024-07-26 23:04:14.478655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.182 [2024-07-26 23:04:14.478674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.182 [2024-07-26 23:04:14.478913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.182 [2024-07-26 23:04:14.479168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.182 [2024-07-26 23:04:14.479196] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.182 [2024-07-26 23:04:14.479213] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.182 [2024-07-26 23:04:14.482794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.182 [2024-07-26 23:04:14.492093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.182 [2024-07-26 23:04:14.492526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.182 [2024-07-26 23:04:14.492558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.182 [2024-07-26 23:04:14.492577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.182 [2024-07-26 23:04:14.492816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.182 [2024-07-26 23:04:14.493069] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.182 [2024-07-26 23:04:14.493096] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.182 [2024-07-26 23:04:14.493113] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.496696] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.505989] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.506462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.506494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.506514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.506753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.506997] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.183 [2024-07-26 23:04:14.507023] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.183 [2024-07-26 23:04:14.507040] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.510622] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.519911] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.520322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.520354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.520373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.520613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.520857] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.183 [2024-07-26 23:04:14.520882] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.183 [2024-07-26 23:04:14.520899] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.524485] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.533777] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.534234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.534267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.534286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.534525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.534769] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.183 [2024-07-26 23:04:14.534795] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.183 [2024-07-26 23:04:14.534811] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.538398] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.547689] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.548135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.548168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.548194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.548435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.548681] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.183 [2024-07-26 23:04:14.548707] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.183 [2024-07-26 23:04:14.548724] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.552310] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.561605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.562056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.562093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.562112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.562352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.562595] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.183 [2024-07-26 23:04:14.562621] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.183 [2024-07-26 23:04:14.562638] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.566225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.575520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.575980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.576013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.576032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.576283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.576527] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.183 [2024-07-26 23:04:14.576553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.183 [2024-07-26 23:04:14.576570] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.580151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.589450] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.589902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.589934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.589953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.590204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.590450] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.183 [2024-07-26 23:04:14.590481] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.183 [2024-07-26 23:04:14.590498] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.594082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.603374] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.603829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.603861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.603880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.604131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.604376] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.183 [2024-07-26 23:04:14.604402] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.183 [2024-07-26 23:04:14.604419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.607995] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.617290] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.617740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.617772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.617790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.618029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.618283] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.183 [2024-07-26 23:04:14.618309] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.183 [2024-07-26 23:04:14.618326] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.183 [2024-07-26 23:04:14.621902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.183 [2024-07-26 23:04:14.631202] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.183 [2024-07-26 23:04:14.631634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.183 [2024-07-26 23:04:14.631667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.183 [2024-07-26 23:04:14.631685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.183 [2024-07-26 23:04:14.631926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.183 [2024-07-26 23:04:14.632180] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.184 [2024-07-26 23:04:14.632207] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.184 [2024-07-26 23:04:14.632225] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.184 [2024-07-26 23:04:14.635800] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.184 [2024-07-26 23:04:14.645128] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.184 [2024-07-26 23:04:14.645551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.184 [2024-07-26 23:04:14.645583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.184 [2024-07-26 23:04:14.645602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.184 [2024-07-26 23:04:14.645842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.184 [2024-07-26 23:04:14.646097] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.184 [2024-07-26 23:04:14.646123] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.184 [2024-07-26 23:04:14.646140] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.184 [2024-07-26 23:04:14.649715] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.184 [2024-07-26 23:04:14.659009] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.184 [2024-07-26 23:04:14.659444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.184 [2024-07-26 23:04:14.659477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.184 [2024-07-26 23:04:14.659496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.184 [2024-07-26 23:04:14.659736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.184 [2024-07-26 23:04:14.659982] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.184 [2024-07-26 23:04:14.660008] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.184 [2024-07-26 23:04:14.660025] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.184 [2024-07-26 23:04:14.663611] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.184 [2024-07-26 23:04:14.672909] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.184 [2024-07-26 23:04:14.673380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.184 [2024-07-26 23:04:14.673413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.184 [2024-07-26 23:04:14.673432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.184 [2024-07-26 23:04:14.673672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.184 [2024-07-26 23:04:14.673917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.184 [2024-07-26 23:04:14.673944] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.184 [2024-07-26 23:04:14.673962] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.184 [2024-07-26 23:04:14.677545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.444 [2024-07-26 23:04:14.686850] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.445 [2024-07-26 23:04:14.687283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.445 [2024-07-26 23:04:14.687316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.445 [2024-07-26 23:04:14.687335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.445 [2024-07-26 23:04:14.687581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.445 [2024-07-26 23:04:14.687827] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.445 [2024-07-26 23:04:14.687852] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.445 [2024-07-26 23:04:14.687869] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.445 [2024-07-26 23:04:14.691458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.445 [2024-07-26 23:04:14.700759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.445 [2024-07-26 23:04:14.701211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.445 [2024-07-26 23:04:14.701244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.445 [2024-07-26 23:04:14.701263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.445 [2024-07-26 23:04:14.701503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.445 [2024-07-26 23:04:14.701747] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.445 [2024-07-26 23:04:14.701774] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.445 [2024-07-26 23:04:14.701791] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.445 [2024-07-26 23:04:14.705374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.445 [2024-07-26 23:04:14.714667] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.445 [2024-07-26 23:04:14.715131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.445 [2024-07-26 23:04:14.715163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.445 [2024-07-26 23:04:14.715182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.445 [2024-07-26 23:04:14.715422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.445 [2024-07-26 23:04:14.715666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.445 [2024-07-26 23:04:14.715691] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.445 [2024-07-26 23:04:14.715708] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.445 [2024-07-26 23:04:14.719294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.445 [2024-07-26 23:04:14.728596] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.445 [2024-07-26 23:04:14.729046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.445 [2024-07-26 23:04:14.729085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.445 [2024-07-26 23:04:14.729105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.445 [2024-07-26 23:04:14.729344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.445 [2024-07-26 23:04:14.729590] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.445 [2024-07-26 23:04:14.729615] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.445 [2024-07-26 23:04:14.729637] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.445 [2024-07-26 23:04:14.733238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.445 [2024-07-26 23:04:14.742533] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.445 [2024-07-26 23:04:14.742969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.445 [2024-07-26 23:04:14.743002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.445 [2024-07-26 23:04:14.743020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.445 [2024-07-26 23:04:14.743268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.445 [2024-07-26 23:04:14.743514] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.445 [2024-07-26 23:04:14.743540] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.445 [2024-07-26 23:04:14.743556] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.445 [2024-07-26 23:04:14.747142] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.445 [2024-07-26 23:04:14.756435] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.445 [2024-07-26 23:04:14.756888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.445 [2024-07-26 23:04:14.756920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.445 [2024-07-26 23:04:14.756939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.445 [2024-07-26 23:04:14.757188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.445 [2024-07-26 23:04:14.757433] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.445 [2024-07-26 23:04:14.757458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.445 [2024-07-26 23:04:14.757475] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.445 [2024-07-26 23:04:14.761052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.445 [2024-07-26 23:04:14.770353] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.445 [2024-07-26 23:04:14.770808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-26 23:04:14.770840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.445 [2024-07-26 23:04:14.770859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.445 [2024-07-26 23:04:14.771110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.445 [2024-07-26 23:04:14.771353] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.445 [2024-07-26 23:04:14.771379] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.445 [2024-07-26 23:04:14.771395] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.445 [2024-07-26 23:04:14.774970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.445 [2024-07-26 23:04:14.784306] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.445 [2024-07-26 23:04:14.784776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-26 23:04:14.784809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.445 [2024-07-26 23:04:14.784829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.445 [2024-07-26 23:04:14.785077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.445 [2024-07-26 23:04:14.785329] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.445 [2024-07-26 23:04:14.785356] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.445 [2024-07-26 23:04:14.785373] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.445 [2024-07-26 23:04:14.788948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.445 [2024-07-26 23:04:14.798260] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.445 [2024-07-26 23:04:14.798665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-26 23:04:14.798697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.445 [2024-07-26 23:04:14.798716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.445 [2024-07-26 23:04:14.798956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.445 [2024-07-26 23:04:14.799213] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.445 [2024-07-26 23:04:14.799239] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.445 [2024-07-26 23:04:14.799257] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.445 [2024-07-26 23:04:14.802831] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.445 [2024-07-26 23:04:14.812135] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.445 [2024-07-26 23:04:14.812584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-26 23:04:14.812616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.445 [2024-07-26 23:04:14.812635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.445 [2024-07-26 23:04:14.812874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.445 [2024-07-26 23:04:14.813129] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.445 [2024-07-26 23:04:14.813154] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.813171] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.816746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.446 [2024-07-26 23:04:14.826040] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.446 [2024-07-26 23:04:14.826475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-26 23:04:14.826508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.446 [2024-07-26 23:04:14.826527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.446 [2024-07-26 23:04:14.826772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.446 [2024-07-26 23:04:14.827017] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.446 [2024-07-26 23:04:14.827042] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.827068] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.830649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.446 [2024-07-26 23:04:14.839949] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.446 [2024-07-26 23:04:14.840417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-26 23:04:14.840450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.446 [2024-07-26 23:04:14.840470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.446 [2024-07-26 23:04:14.840709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.446 [2024-07-26 23:04:14.840953] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.446 [2024-07-26 23:04:14.840979] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.840996] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.844583] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.446 [2024-07-26 23:04:14.853903] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.446 [2024-07-26 23:04:14.854358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-26 23:04:14.854390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.446 [2024-07-26 23:04:14.854409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.446 [2024-07-26 23:04:14.854648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.446 [2024-07-26 23:04:14.854892] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.446 [2024-07-26 23:04:14.854919] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.854936] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.858524] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.446 [2024-07-26 23:04:14.867827] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.446 [2024-07-26 23:04:14.868296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-26 23:04:14.868339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.446 [2024-07-26 23:04:14.868358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.446 [2024-07-26 23:04:14.868598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.446 [2024-07-26 23:04:14.868843] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.446 [2024-07-26 23:04:14.868868] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.868890] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.872476] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.446 [2024-07-26 23:04:14.881778] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.446 [2024-07-26 23:04:14.882222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-26 23:04:14.882260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.446 [2024-07-26 23:04:14.882279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.446 [2024-07-26 23:04:14.882518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.446 [2024-07-26 23:04:14.882762] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.446 [2024-07-26 23:04:14.882788] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.882805] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.886390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.446 [2024-07-26 23:04:14.895690] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.446 [2024-07-26 23:04:14.896140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-26 23:04:14.896173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.446 [2024-07-26 23:04:14.896192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.446 [2024-07-26 23:04:14.896431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.446 [2024-07-26 23:04:14.896676] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.446 [2024-07-26 23:04:14.896702] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.896718] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.900305] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.446 [2024-07-26 23:04:14.909602] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.446 [2024-07-26 23:04:14.910028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-26 23:04:14.910066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.446 [2024-07-26 23:04:14.910088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.446 [2024-07-26 23:04:14.910328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.446 [2024-07-26 23:04:14.910574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.446 [2024-07-26 23:04:14.910600] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.910617] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.914206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.446 [2024-07-26 23:04:14.923511] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.446 [2024-07-26 23:04:14.923967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-26 23:04:14.924005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.446 [2024-07-26 23:04:14.924026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.446 [2024-07-26 23:04:14.924275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.446 [2024-07-26 23:04:14.924519] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.446 [2024-07-26 23:04:14.924545] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.924562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.928148] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.446 [2024-07-26 23:04:14.937443] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.446 [2024-07-26 23:04:14.937893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-26 23:04:14.937925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.446 [2024-07-26 23:04:14.937944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.446 [2024-07-26 23:04:14.938193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.446 [2024-07-26 23:04:14.938438] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.446 [2024-07-26 23:04:14.938464] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.446 [2024-07-26 23:04:14.938481] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.446 [2024-07-26 23:04:14.942056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.706 [2024-07-26 23:04:14.951367] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.706 [2024-07-26 23:04:14.951794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-07-26 23:04:14.951827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.706 [2024-07-26 23:04:14.951849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.706 [2024-07-26 23:04:14.952102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.706 [2024-07-26 23:04:14.952348] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.706 [2024-07-26 23:04:14.952374] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.706 [2024-07-26 23:04:14.952391] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.706 [2024-07-26 23:04:14.955968] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.706 [2024-07-26 23:04:14.965270] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.706 [2024-07-26 23:04:14.965757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-07-26 23:04:14.965790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.706 [2024-07-26 23:04:14.965810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.706 [2024-07-26 23:04:14.966050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.706 [2024-07-26 23:04:14.966311] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.706 [2024-07-26 23:04:14.966337] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.706 [2024-07-26 23:04:14.966354] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.706 [2024-07-26 23:04:14.969931] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.706 [2024-07-26 23:04:14.979229] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.706 [2024-07-26 23:04:14.979691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-07-26 23:04:14.979723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.706 [2024-07-26 23:04:14.979742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.706 [2024-07-26 23:04:14.979981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.706 [2024-07-26 23:04:14.980236] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.706 [2024-07-26 23:04:14.980263] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.706 [2024-07-26 23:04:14.980281] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.706 [2024-07-26 23:04:14.983864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.706 [2024-07-26 23:04:14.993166] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.706 [2024-07-26 23:04:14.993617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-07-26 23:04:14.993649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.706 [2024-07-26 23:04:14.993667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.706 [2024-07-26 23:04:14.993907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.706 [2024-07-26 23:04:14.994162] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.706 [2024-07-26 23:04:14.994189] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.706 [2024-07-26 23:04:14.994207] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.706 [2024-07-26 23:04:14.997785] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.706 [2024-07-26 23:04:15.007086] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.706 [2024-07-26 23:04:15.007547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-07-26 23:04:15.007579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.706 [2024-07-26 23:04:15.007598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.706 [2024-07-26 23:04:15.007838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.706 [2024-07-26 23:04:15.008094] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.706 [2024-07-26 23:04:15.008120] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.706 [2024-07-26 23:04:15.008138] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.706 [2024-07-26 23:04:15.011719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.706 [2024-07-26 23:04:15.021009] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.706 [2024-07-26 23:04:15.021458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-07-26 23:04:15.021491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.706 [2024-07-26 23:04:15.021510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.706 [2024-07-26 23:04:15.021751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.706 [2024-07-26 23:04:15.021995] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.706 [2024-07-26 23:04:15.022021] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.706 [2024-07-26 23:04:15.022038] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.706 [2024-07-26 23:04:15.025628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.706 [2024-07-26 23:04:15.034929] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.706 [2024-07-26 23:04:15.035400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-07-26 23:04:15.035433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.706 [2024-07-26 23:04:15.035453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.706 [2024-07-26 23:04:15.035692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.706 [2024-07-26 23:04:15.035936] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.707 [2024-07-26 23:04:15.035962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.707 [2024-07-26 23:04:15.035980] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.707 [2024-07-26 23:04:15.039567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.707 [2024-07-26 23:04:15.048855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.707 [2024-07-26 23:04:15.049294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-07-26 23:04:15.049327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.707 [2024-07-26 23:04:15.049347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.707 [2024-07-26 23:04:15.049587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.707 [2024-07-26 23:04:15.049832] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.707 [2024-07-26 23:04:15.049858] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.707 [2024-07-26 23:04:15.049875] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.707 [2024-07-26 23:04:15.053468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.707 [2024-07-26 23:04:15.062773] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.707 [2024-07-26 23:04:15.063195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-07-26 23:04:15.063227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.707 [2024-07-26 23:04:15.063251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.707 [2024-07-26 23:04:15.063492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.707 [2024-07-26 23:04:15.063735] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.707 [2024-07-26 23:04:15.063760] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.707 [2024-07-26 23:04:15.063778] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.707 [2024-07-26 23:04:15.067364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.707 [2024-07-26 23:04:15.076675] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.707 [2024-07-26 23:04:15.077181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-07-26 23:04:15.077214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.707 [2024-07-26 23:04:15.077234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.707 [2024-07-26 23:04:15.077474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.707 [2024-07-26 23:04:15.077720] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.707 [2024-07-26 23:04:15.077744] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.707 [2024-07-26 23:04:15.077761] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.707 [2024-07-26 23:04:15.081349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.707 [2024-07-26 23:04:15.090641] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.707 [2024-07-26 23:04:15.091071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-07-26 23:04:15.091102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.707 [2024-07-26 23:04:15.091121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.707 [2024-07-26 23:04:15.091360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.707 [2024-07-26 23:04:15.091605] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.707 [2024-07-26 23:04:15.091631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.707 [2024-07-26 23:04:15.091647] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.707 [2024-07-26 23:04:15.095233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.707 [2024-07-26 23:04:15.104523] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.707 [2024-07-26 23:04:15.104974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-07-26 23:04:15.105005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.707 [2024-07-26 23:04:15.105023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.707 [2024-07-26 23:04:15.105272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.707 [2024-07-26 23:04:15.105518] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.707 [2024-07-26 23:04:15.105548] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.707 [2024-07-26 23:04:15.105566] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.707 [2024-07-26 23:04:15.109147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.707 [2024-07-26 23:04:15.118435] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.707 [2024-07-26 23:04:15.118889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-07-26 23:04:15.118921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420 00:34:22.707 [2024-07-26 23:04:15.118940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set 00:34:22.707 [2024-07-26 23:04:15.119190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor 00:34:22.707 [2024-07-26 23:04:15.119435] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.707 [2024-07-26 23:04:15.119460] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.707 [2024-07-26 23:04:15.119476] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.707 [2024-07-26 23:04:15.123052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.707 [2024-07-26 23:04:15.132349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.707 [2024-07-26 23:04:15.132795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.707 [2024-07-26 23:04:15.132827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.707 [2024-07-26 23:04:15.132845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.707 [2024-07-26 23:04:15.133095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3695503 Killed "${NVMF_APP[@]}" "$@"
00:34:22.707 [2024-07-26 23:04:15.133341] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.707 [2024-07-26 23:04:15.133365] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.707 [2024-07-26 23:04:15.133382] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:22.707 [2024-07-26 23:04:15.136955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3696449
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3696449
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3696449 ']'
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:22.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:22.707 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:22.707 [2024-07-26 23:04:15.146275] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.707 [2024-07-26 23:04:15.146733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.707 [2024-07-26 23:04:15.146765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.707 [2024-07-26 23:04:15.146784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.707 [2024-07-26 23:04:15.147023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.707 [2024-07-26 23:04:15.147279] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.707 [2024-07-26 23:04:15.147305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.707 [2024-07-26 23:04:15.147322] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.707 [2024-07-26 23:04:15.150901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.707 [2024-07-26 23:04:15.160223] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.707 [2024-07-26 23:04:15.160662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.708 [2024-07-26 23:04:15.160694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.708 [2024-07-26 23:04:15.160713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.708 [2024-07-26 23:04:15.160952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.708 [2024-07-26 23:04:15.161206] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.708 [2024-07-26 23:04:15.161232] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.708 [2024-07-26 23:04:15.161249] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.708 [2024-07-26 23:04:15.164719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
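The waitforlisten trace above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) is the test harness polling until the freshly restarted nvmf_tgt accepts connections on its RPC socket. A minimal standalone analog of that wait loop, assuming the same socket path and retry budget (the real shell helper polls through SPDK's rpc.py rather than raw sockets), might look like:

/* sockwait.c - wait until something listens on a UNIX domain socket.
 * Build: cc -o sockwait sockwait.c
 * Run:   ./sockwait /var/tmp/spdk.sock */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/var/tmp/spdk.sock";
    int max_retries = 100;           /* mirrors max_retries=100 in the trace above */

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un sa;
        memset(&sa, 0, sizeof(sa));
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

        /* Success means the target process is up and accepting RPC connections. */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            printf("%s is accepting connections\n", path);
            close(fd);
            return 0;
        }
        close(fd);
        usleep(100 * 1000);          /* 100 ms between attempts */
    }
    fprintf(stderr, "timed out waiting for %s\n", path);
    return 1;
}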
00:34:22.708 [2024-07-26 23:04:15.173779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.708 [2024-07-26 23:04:15.174204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.708 [2024-07-26 23:04:15.174233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.708 [2024-07-26 23:04:15.174251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.708 [2024-07-26 23:04:15.174477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.708 [2024-07-26 23:04:15.174677] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.708 [2024-07-26 23:04:15.174698] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.708 [2024-07-26 23:04:15.174711] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.708 [2024-07-26 23:04:15.177834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.708 [2024-07-26 23:04:15.184335] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:34:22.708 [2024-07-26 23:04:15.184432] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:22.708 [2024-07-26 23:04:15.187210] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.708 [2024-07-26 23:04:15.187733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.708 [2024-07-26 23:04:15.187761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.708 [2024-07-26 23:04:15.187778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.708 [2024-07-26 23:04:15.188030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.708 [2024-07-26 23:04:15.188283] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.708 [2024-07-26 23:04:15.188307] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.708 [2024-07-26 23:04:15.188322] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.708 [2024-07-26 23:04:15.191414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.708 [2024-07-26 23:04:15.200577] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.708 [2024-07-26 23:04:15.201047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.708 [2024-07-26 23:04:15.201083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.708 [2024-07-26 23:04:15.201101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.708 [2024-07-26 23:04:15.201339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.708 [2024-07-26 23:04:15.201551] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.708 [2024-07-26 23:04:15.201571] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.708 [2024-07-26 23:04:15.201584] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.708 [2024-07-26 23:04:15.204695] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.967 [2024-07-26 23:04:15.214168] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.967 [2024-07-26 23:04:15.214605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.967 [2024-07-26 23:04:15.214634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.967 [2024-07-26 23:04:15.214650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.967 [2024-07-26 23:04:15.214897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.967 [2024-07-26 23:04:15.215134] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.967 [2024-07-26 23:04:15.215156] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.967 [2024-07-26 23:04:15.215171] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.967 [2024-07-26 23:04:15.218193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.967 EAL: No free 2048 kB hugepages reported on node 1
00:34:22.967 [2024-07-26 23:04:15.227792] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.967 [2024-07-26 23:04:15.228272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.967 [2024-07-26 23:04:15.228301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.967 [2024-07-26 23:04:15.228318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.967 [2024-07-26 23:04:15.228569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.967 [2024-07-26 23:04:15.228814] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.967 [2024-07-26 23:04:15.228839] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.967 [2024-07-26 23:04:15.228856] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.967 [2024-07-26 23:04:15.232452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.967 [2024-07-26 23:04:15.241676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.967 [2024-07-26 23:04:15.242149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.967 [2024-07-26 23:04:15.242178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.967 [2024-07-26 23:04:15.242197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.967 [2024-07-26 23:04:15.242458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.967 [2024-07-26 23:04:15.242703] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.967 [2024-07-26 23:04:15.242728] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.967 [2024-07-26 23:04:15.242744] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.967 [2024-07-26 23:04:15.246436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
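The EAL message at the top of this block means the 2048 kB hugepage pool on NUMA node 1 is empty; EAL reads those counters from sysfs during initialization. A small sketch of inspecting the same per-node counters directly (standard Linux sysfs paths; looping over nodes 0 and 1 is an assumption matching the message's "node 1"):

/* hugecheck.c - print per-node 2048 kB hugepage counters from sysfs,
 * the counters behind EAL's "No free 2048 kB hugepages reported on node 1".
 * Build: cc -o hugecheck hugecheck.c */
#include <stdio.h>

static long read_counter(const char *path)
{
    long v = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;                  /* unreadable or malformed counter */
        fclose(f);
    }
    return v;
}

int main(void)
{
    char path[256];
    /* Two NUMA nodes assumed here; adjust for the actual host topology. */
    for (int node = 0; node <= 1; node++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/free_hugepages",
                 node);
        long free_pages = read_counter(path);
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/nr_hugepages",
                 node);
        long nr_pages = read_counter(path);
        printf("node %d: %ld of %ld 2048 kB hugepages free\n",
               node, free_pages, nr_pages);
    }
    return 0;
}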
00:34:22.967 [2024-07-26 23:04:15.253890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:22.967 [2024-07-26 23:04:15.255535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.967 [2024-07-26 23:04:15.255967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.967 [2024-07-26 23:04:15.256000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.967 [2024-07-26 23:04:15.256019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.967 [2024-07-26 23:04:15.256288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.967 [2024-07-26 23:04:15.256532] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.967 [2024-07-26 23:04:15.256557] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.967 [2024-07-26 23:04:15.256575] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.967 [2024-07-26 23:04:15.260123] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.967 [2024-07-26 23:04:15.269456] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.967 [2024-07-26 23:04:15.270125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.967 [2024-07-26 23:04:15.270167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.967 [2024-07-26 23:04:15.270189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.967 [2024-07-26 23:04:15.270462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.967 [2024-07-26 23:04:15.270713] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.967 [2024-07-26 23:04:15.270740] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.967 [2024-07-26 23:04:15.270760] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.967 [2024-07-26 23:04:15.274267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.967 [2024-07-26 23:04:15.283409] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.967 [2024-07-26 23:04:15.283883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.967 [2024-07-26 23:04:15.283915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.283934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.284198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.284422] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.968 [2024-07-26 23:04:15.284448] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.968 [2024-07-26 23:04:15.284465] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.968 [2024-07-26 23:04:15.288040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.968 [2024-07-26 23:04:15.297286] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.968 [2024-07-26 23:04:15.297834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.968 [2024-07-26 23:04:15.297865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.297882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.298168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.298397] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.968 [2024-07-26 23:04:15.298437] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.968 [2024-07-26 23:04:15.298455] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.968 [2024-07-26 23:04:15.302052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.968 [2024-07-26 23:04:15.311243] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.968 [2024-07-26 23:04:15.311889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.968 [2024-07-26 23:04:15.311934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.311958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.312220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.312478] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.968 [2024-07-26 23:04:15.312505] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.968 [2024-07-26 23:04:15.312538] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.968 [2024-07-26 23:04:15.316072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.968 [2024-07-26 23:04:15.325192] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.968 [2024-07-26 23:04:15.325604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.968 [2024-07-26 23:04:15.325633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.325649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.325891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.326160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.968 [2024-07-26 23:04:15.326182] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.968 [2024-07-26 23:04:15.326196] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.968 [2024-07-26 23:04:15.329700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.968 [2024-07-26 23:04:15.339031] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.968 [2024-07-26 23:04:15.339498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.968 [2024-07-26 23:04:15.339528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.339545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.339803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.340048] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.968 [2024-07-26 23:04:15.340084] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.968 [2024-07-26 23:04:15.340118] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.968 [2024-07-26 23:04:15.343606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.968 [2024-07-26 23:04:15.345771] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:22.968 [2024-07-26 23:04:15.345818] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:22.968 [2024-07-26 23:04:15.345834] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:22.968 [2024-07-26 23:04:15.345848] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:22.968 [2024-07-26 23:04:15.345860] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:22.968 [2024-07-26 23:04:15.345972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:34:22.968 [2024-07-26 23:04:15.346072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:34:22.968 [2024-07-26 23:04:15.346079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:22.968 [2024-07-26 23:04:15.352565] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.968 [2024-07-26 23:04:15.353083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.968 [2024-07-26 23:04:15.353119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.353140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.353387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.353616] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.968 [2024-07-26 23:04:15.353639] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.968 [2024-07-26 23:04:15.353656] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.968 [2024-07-26 23:04:15.356832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.968 [2024-07-26 23:04:15.366224] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.968 [2024-07-26 23:04:15.366844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.968 [2024-07-26 23:04:15.366885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.366907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.367183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.367404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.968 [2024-07-26 23:04:15.367441] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.968 [2024-07-26 23:04:15.367460] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.968 [2024-07-26 23:04:15.370649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.968 [2024-07-26 23:04:15.379846] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.968 [2024-07-26 23:04:15.380456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.968 [2024-07-26 23:04:15.380498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.380520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.380767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.380981] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.968 [2024-07-26 23:04:15.381003] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.968 [2024-07-26 23:04:15.381021] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.968 [2024-07-26 23:04:15.384224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.968 [2024-07-26 23:04:15.393517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.968 [2024-07-26 23:04:15.394124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.968 [2024-07-26 23:04:15.394166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.394188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.394443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.394657] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.968 [2024-07-26 23:04:15.394679] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.968 [2024-07-26 23:04:15.394709] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.968 [2024-07-26 23:04:15.397895] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.968 [2024-07-26 23:04:15.407224] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.968 [2024-07-26 23:04:15.407876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.968 [2024-07-26 23:04:15.407918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.968 [2024-07-26 23:04:15.407938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.968 [2024-07-26 23:04:15.408174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.968 [2024-07-26 23:04:15.408415] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.969 [2024-07-26 23:04:15.408438] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.969 [2024-07-26 23:04:15.408472] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.969 [2024-07-26 23:04:15.411800] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.969 [2024-07-26 23:04:15.420908] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.969 [2024-07-26 23:04:15.421431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.969 [2024-07-26 23:04:15.421469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.969 [2024-07-26 23:04:15.421492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.969 [2024-07-26 23:04:15.421745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.969 [2024-07-26 23:04:15.421958] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.969 [2024-07-26 23:04:15.421981] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.969 [2024-07-26 23:04:15.422001] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.969 [2024-07-26 23:04:15.425266] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.969 [2024-07-26 23:04:15.434598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.969 [2024-07-26 23:04:15.435011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.969 [2024-07-26 23:04:15.435040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.969 [2024-07-26 23:04:15.435057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.969 [2024-07-26 23:04:15.435285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.969 [2024-07-26 23:04:15.435515] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.969 [2024-07-26 23:04:15.435538] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.969 [2024-07-26 23:04:15.435553] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.969 [2024-07-26 23:04:15.438761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.969 [2024-07-26 23:04:15.448201] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.969 [2024-07-26 23:04:15.448645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.969 [2024-07-26 23:04:15.448675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.969 [2024-07-26 23:04:15.448692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.969 [2024-07-26 23:04:15.448922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.969 [2024-07-26 23:04:15.449164] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.969 [2024-07-26 23:04:15.449188] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.969 [2024-07-26 23:04:15.449204] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.969 [2024-07-26 23:04:15.452473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.969 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:22.969 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:34:22.969 23:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:22.969 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:22.969 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:22.969 [2024-07-26 23:04:15.461720] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.969 [2024-07-26 23:04:15.462128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.969 [2024-07-26 23:04:15.462158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:22.969 [2024-07-26 23:04:15.462175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:22.969 [2024-07-26 23:04:15.462410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:22.969 [2024-07-26 23:04:15.462634] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.969 [2024-07-26 23:04:15.462656] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.969 [2024-07-26 23:04:15.462670] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.969 [2024-07-26 23:04:15.466015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
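[Note: the records above repeat one pattern: bdev_nvme requests a controller reset, posix.c's connect() returns errno 111 (ECONNREFUSED, nothing accepting on 10.0.0.2:4420 yet), and spdk_nvme_ctrlr_reconnect_poll_async gives up until the next retry. A minimal way to reproduce that errno from the shell, assuming bash's /dev/tcp support and a genuinely closed port (both assumptions, not part of the harness):

    # Probe 10.0.0.2:4420 with a plain TCP connect, as posix_sock_create does.
    # bash reports "Connection refused" for the same ECONNREFUSED (111) seen above.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "connect() refused, matching the posix.c:1037 errors in this log"
    fi
]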
00:34:23.229 [2024-07-26 23:04:15.475520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.229 [2024-07-26 23:04:15.475947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.229 [2024-07-26 23:04:15.475982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:23.229 [2024-07-26 23:04:15.475999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:23.229 [2024-07-26 23:04:15.476223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:23.229 [2024-07-26 23:04:15.476457] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.229 [2024-07-26 23:04:15.476479] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.229 [2024-07-26 23:04:15.476495] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:23.229 [2024-07-26 23:04:15.479755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.229 [2024-07-26 23:04:15.483942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:23.229 [2024-07-26 23:04:15.489102] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.229 [2024-07-26 23:04:15.489521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.229 [2024-07-26 23:04:15.489550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:23.229 [2024-07-26 23:04:15.489567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.229 [2024-07-26 23:04:15.489783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:23.229 [2024-07-26 23:04:15.490030] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.229 [2024-07-26 23:04:15.490053] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.229 [2024-07-26 23:04:15.490077] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.229 [2024-07-26 23:04:15.493343] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.229 [2024-07-26 23:04:15.502552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.229 [2024-07-26 23:04:15.502906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.229 [2024-07-26 23:04:15.502933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:23.229 [2024-07-26 23:04:15.502949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:23.229 [2024-07-26 23:04:15.503187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:23.229 [2024-07-26 23:04:15.503438] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.229 [2024-07-26 23:04:15.503460] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.229 [2024-07-26 23:04:15.503474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.229 [2024-07-26 23:04:15.506653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.229 [2024-07-26 23:04:15.516014] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.229 [2024-07-26 23:04:15.516704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.229 [2024-07-26 23:04:15.516745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:23.229 [2024-07-26 23:04:15.516766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:23.229 [2024-07-26 23:04:15.517021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:23.229 [2024-07-26 23:04:15.517265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.229 [2024-07-26 23:04:15.517289] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.229 [2024-07-26 23:04:15.517329] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.229 [2024-07-26 23:04:15.520548] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.229 Malloc0
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.229 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:23.229 [2024-07-26 23:04:15.529783] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.230 [2024-07-26 23:04:15.530283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.230 [2024-07-26 23:04:15.530315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:23.230 [2024-07-26 23:04:15.530334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:23.230 [2024-07-26 23:04:15.530581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:23.230 [2024-07-26 23:04:15.530789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.230 [2024-07-26 23:04:15.530813] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.230 [2024-07-26 23:04:15.530829] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:23.230 [2024-07-26 23:04:15.534068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:23.230 [2024-07-26 23:04:15.543292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.230 [2024-07-26 23:04:15.543795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.230 [2024-07-26 23:04:15.543825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570e70 with addr=10.0.0.2, port=4420
00:34:23.230 [2024-07-26 23:04:15.543842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570e70 is same with the state(5) to be set
00:34:23.230 [2024-07-26 23:04:15.544109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570e70 (9): Bad file descriptor
00:34:23.230 [2024-07-26 23:04:15.544329] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.230 [2024-07-26 23:04:15.544353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.230 [2024-07-26 23:04:15.544368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.230 [2024-07-26 23:04:15.545520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:23.230 [2024-07-26 23:04:15.547631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.230 23:04:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3695790
00:34:23.230 [2024-07-26 23:04:15.556765] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.230 [2024-07-26 23:04:15.633550] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
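[Note: the rpc_cmd calls traced above (host/bdevperf.sh lines 17-21) are the whole target-side bring-up: create the TCP transport, back it with a malloc bdev, expose it as cnode1, and listen on 10.0.0.2:4420. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so against a live nvmf_tgt the equivalent sequence would be roughly the sketch below (an assumption about invocation, not copied from the script):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
]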
00:34:33.230
00:34:33.230                                                                                                 Latency(us)
00:34:33.230 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:33.230 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:33.230 Verification LBA range: start 0x0 length 0x4000
00:34:33.230 Nvme1n1                                :      15.01    6889.36      26.91    8579.45       0.00    8249.87    1134.74   22427.88
00:34:33.230 ===================================================================================================================
00:34:33.230 Total                                  :               6889.36      26.91    8579.45       0.00    8249.87    1134.74   22427.88
00:34:33.230 23:04:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
23:04:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
23:04:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
23:04:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3696449 ']'
23:04:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3696449
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3696449 ']'
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3696449
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3696449
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3696449'
killing process with pid 3696449
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3696449
23:04:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 3696449
23:04:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
23:04:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
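[Note: the IOPS and MiB/s columns in the bdevperf table above are internally consistent: with 4096-byte I/O, 6889.36 IOPS x 4096 B = 28,218,818 B/s, which is 26.91 MiB/s, matching the reported throughput. A quick shell check, using only awk (not part of the harness):

    awk 'BEGIN { printf "%.2f MiB/s\n", 6889.36 * 4096 / 1048576 }'   # prints 26.91 MiB/s
]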
00:34:33.231 23:04:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
23:04:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
23:04:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
23:04:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
23:04:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
23:04:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:35.148 23:04:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:35.148
00:34:35.148 real	0m22.327s
00:34:35.148 user	1m0.181s
00:34:35.148 sys	0m4.096s
00:34:35.148 23:04:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:35.148 23:04:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:35.148 ************************************
00:34:35.148 END TEST nvmf_bdevperf
00:34:35.148 ************************************
00:34:35.148 23:04:27 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:34:35.148 23:04:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:34:35.148 23:04:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:35.148 23:04:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:35.148 ************************************
00:34:35.148 START TEST nvmf_target_disconnect
00:34:35.148 ************************************
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:34:35.148 * Looking for test storage...
00:34:35.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
00:34:35.148 23:04:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=()
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=()
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=()
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=()
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:37.051 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:34:37.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:37.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms
00:34:37.051
00:34:37.051 --- 10.0.0.2 ping statistics ---
00:34:37.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:37.051 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:37.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:37.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms
00:34:37.052
00:34:37.052 --- 10.0.0.1 ping statistics ---
00:34:37.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:37.052 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:34:37.052 ************************************
00:34:37.052 START TEST nvmf_target_disconnect_tc1
00:34:37.052 ************************************
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:34:37.052 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:37.052 EAL: No free 2048 kB hugepages reported on node 1
00:34:37.052 [2024-07-26 23:04:29.546233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.052 [2024-07-26 23:04:29.546332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd0520 with addr=10.0.0.2, port=4420
00:34:37.052 [2024-07-26 23:04:29.546364] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:34:37.052 [2024-07-26 23:04:29.546402] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:34:37.052 [2024-07-26 23:04:29.546416] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:34:37.052 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:34:37.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:34:37.311 Initializing NVMe Controllers
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:34:37.311
00:34:37.311 real	0m0.095s
00:34:37.311 user	0m0.034s
00:34:37.311 sys	0m0.061s
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:34:37.311 ************************************
00:34:37.311 END TEST nvmf_target_disconnect_tc1
00:34:37.311 ************************************
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:34:37.311 ************************************
00:34:37.311 START TEST nvmf_target_disconnect_tc2
00:34:37.311 ************************************
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3699598
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3699598
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3699598 ']'
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:37.311 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:37.311 [2024-07-26 23:04:29.658485] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:34:37.311 [2024-07-26 23:04:29.658571] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.311 EAL: No free 2048 kB hugepages reported on node 1 00:34:37.311 [2024-07-26 23:04:29.724982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:37.311 [2024-07-26 23:04:29.812631] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:37.311 [2024-07-26 23:04:29.812701] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:37.311 [2024-07-26 23:04:29.812723] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:37.311 [2024-07-26 23:04:29.812734] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:37.311 [2024-07-26 23:04:29.812744] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:37.311 [2024-07-26 23:04:29.812831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:37.311 [2024-07-26 23:04:29.812911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:37.311 [2024-07-26 23:04:29.812914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:37.311 [2024-07-26 23:04:29.812855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.569 Malloc0 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.569 [2024-07-26 23:04:29.977472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.569 23:04:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.569 [2024-07-26 23:04:30.005723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3699625 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:37.569 23:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:37.569 EAL: No free 2048 kB hugepages reported on node 1 00:34:40.123 23:04:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3699598 00:34:40.124 23:04:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:40.124 Read completed with error (sct=0, sc=8) 00:34:40.124 starting I/O failed 00:34:40.124 Read completed with error (sct=0, sc=8) 
00:34:40.124 starting I/O failed
00:34:40.124 Read completed with error (sct=0, sc=8)
00:34:40.124 starting I/O failed
[... identical "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs repeat for every outstanding I/O, first on qpair 3 and then again on qpairs 4, 2 and 1, each run ending with its own CQ transport error: ...]
00:34:40.124 [2024-07-26 23:04:32.031095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.124 [2024-07-26 23:04:32.031420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:40.124 [2024-07-26 23:04:32.031742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:40.124 [2024-07-26 23:04:32.032054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:40.124 [2024-07-26 23:04:32.032259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.124 [2024-07-26 23:04:32.032300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.124 qpair failed and we were unable to recover it.
[... several more reconnect attempts fail with the same connect() failed / sock connection error / qpair failed sequence ...]
00:34:40.124 [2024-07-26 23:04:32.033741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.124 [2024-07-26 23:04:32.033770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.124 qpair failed and we were unable to recover it.
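(Editor's note: the rpc_cmd lines earlier in tc2 are the entire target configuration: one 64 MB malloc bdev with 512-byte blocks, a TCP transport, a subsystem carrying that bdev as a namespace, and data plus discovery listeners on 10.0.0.2:4420. The reconnect example is then started in the background and the target is killed two seconds in, which is what produces the completion errors and refused reconnects seen here. A rough shell equivalent, with every argument copied from the log; the rpc.py path and the nvmfpid variable from the earlier sketch are assumptions:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Drive I/O at queue depth 32, 4096-byte random 50/50 read/write for
    # 10 seconds on cores 0-3, then yank the target out from under it.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"

End of note.)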
00:34:40.124 [2024-07-26 23:04:32.033925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.124 [2024-07-26 23:04:32.033954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.124 qpair failed and we were unable to recover it.
[... the identical connect() failed / sock connection error / qpair failed sequence repeats for every retry against 10.0.0.2:4420 over the next ~30 ms ...]
00:34:40.126 [2024-07-26 23:04:32.061660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.126 [2024-07-26 23:04:32.061686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.126 qpair failed and we were unable to recover it.
00:34:40.126 [2024-07-26 23:04:32.061856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.061882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.062076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.062103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.062336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.062364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.062537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.062564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.062734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.062760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.062913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.062939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.063133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.063162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.063353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.063380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.063601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.063630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.063804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.063830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 
00:34:40.126 [2024-07-26 23:04:32.064031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.064072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.064240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.064266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.064472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.064498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.064644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.064670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.064854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.064883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.065070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.065100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.126 [2024-07-26 23:04:32.065296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.126 [2024-07-26 23:04:32.065322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.126 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.065481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.065507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.065673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.065699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.065849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.065875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 
00:34:40.127 [2024-07-26 23:04:32.066038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.066100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.066291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.066317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.066512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.066538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.066770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.066796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.067001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.067027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.067293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.067323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.067506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.067535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.067701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.067730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.067924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.067950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.068130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.068157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 
00:34:40.127 [2024-07-26 23:04:32.068357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.068383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.068530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.068557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.068755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.068782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.069003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.069032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.069267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.069293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.069479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.069505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.069655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.069681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.069821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.069848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.070039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.070079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.070285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.070312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 
00:34:40.127 [2024-07-26 23:04:32.070501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.070527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.070718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.070745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.070939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.070969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.071164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.071191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.071380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.071409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.071602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.071628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.071800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.071827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.072016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.072057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.072284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.072311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.072483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.072509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 
00:34:40.127 [2024-07-26 23:04:32.072678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.072704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.072878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.072905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.073081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.073108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.073287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.073314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.073493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.073520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.073668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.073695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.073924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.073954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.074198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.074228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.074436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.074463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.074604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.074631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 
00:34:40.127 [2024-07-26 23:04:32.074815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.074859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.075057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.075098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.075248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.075276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.075468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.075495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.075667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.075694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.075860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.075886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.076113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.076143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.076340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.076377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.076527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.076553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.076743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.076773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 
00:34:40.127 [2024-07-26 23:04:32.076967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.076993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.077178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.077205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.077419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.077448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.077618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.077645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.077839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.077866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.078027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.078066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.078208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.078235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.078383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.078409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.078572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.078599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.078739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.078765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 
00:34:40.127 [2024-07-26 23:04:32.078941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.078967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.079204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.079230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.079370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.079397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.079576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.079602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.079766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.079793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.079965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.127 [2024-07-26 23:04:32.079992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.127 qpair failed and we were unable to recover it. 00:34:40.127 [2024-07-26 23:04:32.080154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.080181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.080395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.080424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.080586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.080612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.080801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.080827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 
00:34:40.128 [2024-07-26 23:04:32.080970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.080996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.081178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.081205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.081419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.081448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.081640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.081673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.081866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.081892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.082035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.082069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.082240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.082266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.082422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.082448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.082613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.082639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.082779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.082805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 
00:34:40.128 [2024-07-26 23:04:32.083006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.083032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.083228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.083254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.083450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.083480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.083674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.083700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.083909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.083938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.084112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.084139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.084308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.084334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.084544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.084570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.084708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.084734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.084932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.084959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 
00:34:40.128 [2024-07-26 23:04:32.085150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.085181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.085344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.085373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.085569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.085595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.085790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.085816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.085994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.086020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.086213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.086239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.086463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.086492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.086710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.086736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.086904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.086930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.087105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.087132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 
00:34:40.128 [2024-07-26 23:04:32.087330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.087367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.087539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.087565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.087782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.087812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.088035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.088071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.088237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.088264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.088416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.088443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.088639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.088668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.088863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.088897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.089046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.089080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 00:34:40.128 [2024-07-26 23:04:32.089265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.128 [2024-07-26 23:04:32.089294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.128 qpair failed and we were unable to recover it. 
00:34:40.128 [2024-07-26 23:04:32.089489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.128 [2024-07-26 23:04:32.089515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.128 qpair failed and we were unable to recover it.
00:34:40.128 [... the same three-line error repeats back-to-back, identical except for timestamps, from 2024-07-26 23:04:32.089 through 23:04:32.134 ...]
00:34:40.132 [2024-07-26 23:04:32.134488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.132 [2024-07-26 23:04:32.134514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.132 qpair failed and we were unable to recover it.
00:34:40.132 [2024-07-26 23:04:32.134709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.134738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.134929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.134956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.135144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.135174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.135366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.135392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.135526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.135552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.135747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.135776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.135966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.135991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.136158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.136184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.136345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.136374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.136564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.136592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 
00:34:40.132 [2024-07-26 23:04:32.136774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.136800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.136994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.137021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.137188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.137216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.137384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.137410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.137594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.137621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.137773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.137801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.137963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.137989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.138195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.138224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.138422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.138448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.138610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.138636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 
00:34:40.132 [2024-07-26 23:04:32.138794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.138821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.138982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.139008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.139212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.139239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.139411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.139437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.139641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.139669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.139860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.139886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.140072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.140100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.140287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.140313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.140463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.140490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.140631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.140657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 
00:34:40.132 [2024-07-26 23:04:32.140835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.140863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.141051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.141088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.141276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.141302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.141489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.141516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.141667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.141693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.141872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.141899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.142046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.142084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.142275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.142301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.142489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.142516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.142677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.142704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 
00:34:40.132 [2024-07-26 23:04:32.142889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.142915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.143128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.143155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.143353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.143379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.143551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.143578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.143747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.132 [2024-07-26 23:04:32.143773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.132 qpair failed and we were unable to recover it. 00:34:40.132 [2024-07-26 23:04:32.143944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.143970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.144136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.144163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.144336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.144371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.144565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.144591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.144759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.144786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 
00:34:40.133 [2024-07-26 23:04:32.144950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.144976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.145157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.145184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.145332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.145361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.145530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.145557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.145727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.145753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.145949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.145975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.146116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.146142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.146280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.146306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.146482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.146508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.146703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.146729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 
00:34:40.133 [2024-07-26 23:04:32.146926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.146952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.147160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.147187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.147363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.147389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.147533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.147559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.147735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.147761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.147934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.147964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.148135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.148161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.148335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.148364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.148509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.148536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.148730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.148756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 
00:34:40.133 [2024-07-26 23:04:32.148950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.148976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.149149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.149176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.149346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.149372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.149533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.149559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.149725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.149751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.149948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.149975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.150145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.150172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.150314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.150340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.150490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.150516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.150715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.150741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 
00:34:40.133 [2024-07-26 23:04:32.150880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.150906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.151065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.151093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.151260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.151287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.151459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.151485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.151659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.151685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.151875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.151902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.152105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.152132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.152324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.152361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.152532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.152558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.152757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.152783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 
00:34:40.133 [2024-07-26 23:04:32.152957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.152983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.153137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.153164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.153337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.153374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.153521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.153548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.153723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.153749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.153916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.153942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.154134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.154161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.154325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.154360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.154529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.154555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.154698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.154724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 
00:34:40.133 [2024-07-26 23:04:32.154891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.154917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.155064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.155091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.155235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.155261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.155432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.155458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.155634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.155660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.155827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.155853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.156025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.156069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.156233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.156260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.156402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.156428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.156625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.156651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 
00:34:40.133 [2024-07-26 23:04:32.156796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.156822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.157009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.157035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.157256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.157282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.133 [2024-07-26 23:04:32.157454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.133 [2024-07-26 23:04:32.157480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.133 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.157676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.157702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.157868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.157896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.158034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.158072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.158213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.158240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.158382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.158409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.158556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.158586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 
00:34:40.134 [2024-07-26 23:04:32.158752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.158778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.158922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.158948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.159163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.159190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.159327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.159360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.159528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.159554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.159748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.159775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.159954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.159980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.160160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.160187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.160354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.160380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.160527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.160553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 
00:34:40.134 [2024-07-26 23:04:32.160727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.160753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.160914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.160940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.161132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.161158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.161366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.161392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.161541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.161568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.161765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.161791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.161958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.161984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.162153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.162180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.162355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.162381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.162544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.162570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 
00:34:40.134 [2024-07-26 23:04:32.162774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.162799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.162951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.162977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.163184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.163210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.163353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.163379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.163525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.163551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.163722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.163748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.163923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.163949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.164101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.164128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.164299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.164325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 00:34:40.134 [2024-07-26 23:04:32.164484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.134 [2024-07-26 23:04:32.164510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.134 qpair failed and we were unable to recover it. 
00:34:40.134 [2024-07-26 23:04:32.164663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.164689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.164859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.164885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.165057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.165089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.165239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.165265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.165438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.165466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.165634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.165660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.165826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.165852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.166047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.166081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.166252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.166278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.166450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.166476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.166656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.166698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.166882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.166937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.167144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.167172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.167350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.167378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.167538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.167583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.167792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.167820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.168018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.168046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.168223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.168250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.168422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.168448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.168723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.168769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.168994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.169020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.169198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.169225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.169434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.169461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.169625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.169651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.169843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.169872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.170063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.170090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.170300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.170326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.170496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.134 [2024-07-26 23:04:32.170526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.134 qpair failed and we were unable to recover it.
00:34:40.134 [2024-07-26 23:04:32.170813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.170874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.171087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.171114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.171310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.171336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.171506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.171536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.171716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.171745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.171943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.171970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.172148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.172177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.172351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.172378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.172568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.172613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.172855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.172882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.173046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.173079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.173223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.173249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.173460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.173486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.173626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.173652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.173885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.173911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.174088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.174117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.174300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.174328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.174524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.174573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.174827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.174854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.175057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.175092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.175266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.175296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.175468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.175517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.175695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.175739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.175914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.175942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.176108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.176136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.176283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.176310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.176519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.176546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.176768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.176819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.176999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.177027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.177204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.177232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.177403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.177433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.177629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.177675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.177860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.177889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.178079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.178106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.178256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.178284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.178451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.178477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.178627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.178656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.178835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.178862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.179069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.179099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.179272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.179306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.179446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.179473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.179690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.179721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.179888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.179915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.180092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.180120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.180323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.180367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.180723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.180787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.180985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.181011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.181191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.181217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.181411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.181440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.181654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.181687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.181874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.181903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.182097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.182126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.182364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.182407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.182607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.182634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.182804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.182831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.183031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.183064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.183248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.183275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.183463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.183491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.183686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.183713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.183862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.183890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.184093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.184120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.184269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.184295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.184471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.184498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.184704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.184730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.184897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.184923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.185091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.185118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.185316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.185363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.185572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.135 [2024-07-26 23:04:32.185599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.135 qpair failed and we were unable to recover it.
00:34:40.135 [2024-07-26 23:04:32.185739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.185766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.185959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.185985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.186172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.186218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.186393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.186420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.186590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.186618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.186816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.186843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.187018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.187045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.187285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.187326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.187515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.187545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.187761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.187791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.188009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.188035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.188287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.188314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.188506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.188536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.188802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.188846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.189017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.189043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.189225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.189252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.189404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.189431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.189661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.189690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.189904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.189933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.190134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.190161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.190341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.190367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.190528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.190554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.190710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.190737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.190879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.190906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.191153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.191180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.191321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.191348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.191534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.191561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.191775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.191801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.191974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.192000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.192196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.192223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.192375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.192403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.192671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.192719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.192929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.192958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.193174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.193201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.193371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.193397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.193609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.193640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.193867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.193909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.194098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.194125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.194294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.194319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.194540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.194569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.194820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.194846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.195042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.195081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.195231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.195257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.195425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.195452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.195620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.195649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.195846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.195874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.196056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.196108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.196245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.196271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.196481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.196509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.196693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.196721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.196897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.196925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.197106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.197133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.197307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.197334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.197517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.197559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.197754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.197783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.197966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.197995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.198233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.198259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.198409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.198435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.198607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.198633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.198849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.198878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.199098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.199141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.199290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.199316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.199547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.199580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.199806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.199835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.200018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.200043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.200227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.200255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.200470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.200514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.200733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.136 [2024-07-26 23:04:32.200759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.136 qpair failed and we were unable to recover it.
00:34:40.136 [2024-07-26 23:04:32.200911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.137 [2024-07-26 23:04:32.200938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.137 qpair failed and we were unable to recover it.
00:34:40.137 [2024-07-26 23:04:32.201125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.137 [2024-07-26 23:04:32.201154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.137 qpair failed and we were unable to recover it.
00:34:40.137 [2024-07-26 23:04:32.201372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.137 [2024-07-26 23:04:32.201398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.137 qpair failed and we were unable to recover it.
00:34:40.137 [2024-07-26 23:04:32.201602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.137 [2024-07-26 23:04:32.201631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.137 qpair failed and we were unable to recover it.
00:34:40.137 [2024-07-26 23:04:32.201800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.137 [2024-07-26 23:04:32.201842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.137 qpair failed and we were unable to recover it.
00:34:40.137 [2024-07-26 23:04:32.202065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.137 [2024-07-26 23:04:32.202092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.137 qpair failed and we were unable to recover it.
00:34:40.137 [2024-07-26 23:04:32.202269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.202298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.202544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.202571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.202794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.202820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.203002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.203028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.203223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.203250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.203400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.203434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.203569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.203595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.203740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.203782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.204000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.204026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.204182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.204209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 
00:34:40.137 [2024-07-26 23:04:32.204401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.204431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.204650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.204676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.204889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.204918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.205091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.205121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.205350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.205376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.205572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.205603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.205811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.205838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.206009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.206035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.206207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.206234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.206379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.206405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 
00:34:40.137 [2024-07-26 23:04:32.206579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.206605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.206755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.206782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.207010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.207037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.207268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.207294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.207510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.207537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.207728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.207758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.207950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.207977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.208199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.208228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.208419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.208446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.208666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.208693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 
00:34:40.137 [2024-07-26 23:04:32.208913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.208942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.209125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.209154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.209329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.209366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.209541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.209567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.209739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.209769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.209982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.210008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.210186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.210212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.210387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.210413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.210610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.210636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.210838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.210864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 
00:34:40.137 [2024-07-26 23:04:32.211076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.211119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.211301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.211328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.211522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.211549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.211732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.211759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.212019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.212046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.212238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.212266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.212461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.212490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.212709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.212735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.212929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.212958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.213132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.213162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 
00:34:40.137 [2024-07-26 23:04:32.213336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.213369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.213528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.213557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.213717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.213760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.213955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.213982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.214160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.214188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.214373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.214401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.214598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.214628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.137 [2024-07-26 23:04:32.214814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.137 [2024-07-26 23:04:32.214841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.137 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.215098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.215126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.215337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.215368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 
00:34:40.138 [2024-07-26 23:04:32.215556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.215584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.215761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.215789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.215997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.216024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.216244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.216271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.216454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.216483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.216674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.216700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.216920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.216949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.217180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.217208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.217375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.217402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.217546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.217572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 
00:34:40.138 [2024-07-26 23:04:32.217727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.217755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.217950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.217976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.218180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.218210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.218410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.218437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.218579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.218605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.218803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.218830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.218996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.219024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.219229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.219256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.219474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.219503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.219662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.219695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 
00:34:40.138 [2024-07-26 23:04:32.219864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.219891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.220027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.220053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.220308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.220335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.220548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.220578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.220767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.220795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.221019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.221046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.221207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.221233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.221417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.221447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.221622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.221649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.221843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.221870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 
00:34:40.138 [2024-07-26 23:04:32.222070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.222110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.222293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.222322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.222518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.222545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.222712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.222739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.222976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.223005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.223169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.223196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.223361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.223389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.223608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.223635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.223828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.223856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.224045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.224081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 
00:34:40.138 [2024-07-26 23:04:32.224264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.224294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.224519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.224547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.224763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.224793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.225002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.225032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.225261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.225288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.225496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.225524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.225714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.225741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.225892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.225919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.226135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.226165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.226374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.226402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 
00:34:40.138 [2024-07-26 23:04:32.226548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.226579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.226775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.226802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.226939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.226966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.227150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.227177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.227319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.227363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.227602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.227629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.227821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.227849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.228040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.228077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.228259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.228288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.228472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.228499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 
00:34:40.138 [2024-07-26 23:04:32.228654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.228681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.228858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.228886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.229024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.229049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.229217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.229243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.229439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.229466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.229607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.229634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.229817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.229847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.230071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.230108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.230282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.138 [2024-07-26 23:04:32.230308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.138 qpair failed and we were unable to recover it. 00:34:40.138 [2024-07-26 23:04:32.230472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.230499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 
00:34:40.139 [2024-07-26 23:04:32.230701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.230731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.230948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.230974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.231190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.231220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.231408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.231438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.231627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.231653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.231825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.231853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.232000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.232026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.232209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.232235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.232448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.232477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.232665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.232694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 
00:34:40.139 [2024-07-26 23:04:32.232874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.232902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.233121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.233151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.233324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.233353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.233578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.233604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.233779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.233806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.233975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.234002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.234180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.234208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.234403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.234433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.234649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.234679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 00:34:40.139 [2024-07-26 23:04:32.234859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.234886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it. 
00:34:40.139 [2024-07-26 23:04:32.235069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.139 [2024-07-26 23:04:32.235108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.139 qpair failed and we were unable to recover it.
00:34:40.142 [2024-07-26 23:04:32.281451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.281478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it.
00:34:40.142 [2024-07-26 23:04:32.281650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.281676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.281886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.281929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.282125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.282153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.282363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.282390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.282563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.282590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.282767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.282794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.282988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.283017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.283213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.283243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.283429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.283459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.283676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.283706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 
00:34:40.142 [2024-07-26 23:04:32.283894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.283924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.284140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.284168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.284360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.284390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.284553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.284584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.284780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.284807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.284980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.285007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.285227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.285257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.285487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.285514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.285690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.285717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.285909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.285938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 
00:34:40.142 [2024-07-26 23:04:32.286106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.286134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.286289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.286319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.286506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.286536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.286699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.286726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.286924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.286951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.287154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.287184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.287400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.287427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.287613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.287643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.287835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.287862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.288057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.288092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 
00:34:40.142 [2024-07-26 23:04:32.288285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.288314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.288509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.288539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.288697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.288724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.288897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.288924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.289087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.289118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.142 qpair failed and we were unable to recover it. 00:34:40.142 [2024-07-26 23:04:32.289332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.142 [2024-07-26 23:04:32.289363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.289551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.289581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.289800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.289828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.290000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.290027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.290202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.290229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 
00:34:40.143 [2024-07-26 23:04:32.290430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.290457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.290607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.290634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.290823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.290853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.291047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.291098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.291280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.291307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.291460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.291487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.291661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.291688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.291888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.291915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.292076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.292107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.292291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.292321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 
00:34:40.143 [2024-07-26 23:04:32.292543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.292570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.292762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.292792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.292947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.292977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.293154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.293180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.293351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.293378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.293541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.293567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.293705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.293732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.293928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.293956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.294184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.294214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.294432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.294459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 
00:34:40.143 [2024-07-26 23:04:32.294616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.294646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.294836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.294863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.295024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.295054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.295210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.295238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.295408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.295435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.295576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.295603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.295789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.295819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.296042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.296078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.296243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.296270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.296487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.296517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 
00:34:40.143 [2024-07-26 23:04:32.296695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.296722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.296894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.296921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.297117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.297147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.297303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.297333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.297564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.297592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.297783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.297813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.298020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.298050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.298224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.298251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.298401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.298428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.298626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.298652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 
00:34:40.143 [2024-07-26 23:04:32.298841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.298867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.299071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.299098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.299303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.299333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.299517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.299543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.299755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.299784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.299982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.300009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.300161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.300188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.300364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.300390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.300617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.300647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.300869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.300896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 
00:34:40.143 [2024-07-26 23:04:32.301085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.301116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.301316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.301343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.301514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.301541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.301725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.301754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.301942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.301971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.302167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.302195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.302390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.302420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.302587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.302617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.302804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.302831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.303021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.303051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 
00:34:40.143 [2024-07-26 23:04:32.303245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.303275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.303443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.303470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.303653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.303683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.303888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.303916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.304111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.304138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.304329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.304359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.304567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.304594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.304795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.143 [2024-07-26 23:04:32.304822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.143 qpair failed and we were unable to recover it. 00:34:40.143 [2024-07-26 23:04:32.304983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.305013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.305195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.305225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 
00:34:40.144 [2024-07-26 23:04:32.305379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.305406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.305590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.305620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.305810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.305841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.306048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.306084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.306271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.306301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.306468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.306498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.306692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.306719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.306925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.306952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.307129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.307168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.307367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.307404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 
00:34:40.144 [2024-07-26 23:04:32.307572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.307599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.307770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.307796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.307996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.308023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.308199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.308229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.308420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.308450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.308639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.308666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.308811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.308840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.309050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.309085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.309263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.309290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 00:34:40.144 [2024-07-26 23:04:32.309482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.144 [2024-07-26 23:04:32.309511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.144 qpair failed and we were unable to recover it. 
00:34:40.144 [2024-07-26 23:04:32.309699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.144 [2024-07-26 23:04:32.309733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.144 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeats back-to-back, roughly 210 occurrences between 23:04:32.309699 and 23:04:32.357355 ...]
00:34:40.147 [2024-07-26 23:04:32.357330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.147 [2024-07-26 23:04:32.357355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.147 qpair failed and we were unable to recover it.
00:34:40.147 [2024-07-26 23:04:32.357525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.357554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.357774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.357802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.357958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.357986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.358186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.358213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.358406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.358435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.358647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.358675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.358886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.358914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.359079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.359125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.359279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.359304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.359517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.359545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 
00:34:40.147 [2024-07-26 23:04:32.359697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.359724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.359904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.359932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.360130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.360156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.360321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.360346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.360502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.360529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.360738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.360766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.360981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.361008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.361228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.361253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.361439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.361467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.361628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.361669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 
00:34:40.147 [2024-07-26 23:04:32.361861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.361889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.362084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.362125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.362266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.362292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.362478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.362507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.147 [2024-07-26 23:04:32.362698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.147 [2024-07-26 23:04:32.362727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.147 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.362882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.362910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.363075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.363116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.363280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.363306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.363523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.363551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.363753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.363794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 
00:34:40.148 [2024-07-26 23:04:32.363966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.363990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.364158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.364184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.364361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.364402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.364564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.364591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.364782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.364814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.365079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.365105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.365288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.365314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.365527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.365555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.365751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.365779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.366065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.366111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 
00:34:40.148 [2024-07-26 23:04:32.366261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.366287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.366459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.366484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.366630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.366670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.366886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.366916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.367094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.367137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.367307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.367332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.367560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.367588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.367872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.367928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.368137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.368164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.368349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.368377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 
00:34:40.148 [2024-07-26 23:04:32.368567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.368609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.368844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.368872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.369053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.369105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.369247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.369273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.369497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.369525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.369704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.369731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.369946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.369975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.370180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.370206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.370380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.370405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.370575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.370604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 
00:34:40.148 [2024-07-26 23:04:32.370823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.370852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.370999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.371027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.371242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.371269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.371408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.371433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.371602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.371630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.371807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.371835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.372028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.372053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.372266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.372291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.372522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.372551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.372741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.372769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 
00:34:40.148 [2024-07-26 23:04:32.372931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.372959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.373159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.373184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.373424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.373453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.373649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.373677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.373953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.374004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.374184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.374211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.374382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.374410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.374578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.374606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.374797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.374854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.375036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.375072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 
00:34:40.148 [2024-07-26 23:04:32.375240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.375265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.375469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.375497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.375660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.375688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.375896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.375923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.376130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.376157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.376325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.376368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.376585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.376612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.376798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.376825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.377008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.377037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.377234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.377260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 
00:34:40.148 [2024-07-26 23:04:32.377475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.377502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.377694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.377723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.377911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.377940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.148 qpair failed and we were unable to recover it. 00:34:40.148 [2024-07-26 23:04:32.378135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.148 [2024-07-26 23:04:32.378162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.378332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.378358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.378532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.378557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.378722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.378750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.378965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.378993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.379194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.379220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.379401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.379427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 
00:34:40.149 [2024-07-26 23:04:32.379617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.379645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.379846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.379874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.380085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.380117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.380309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.380342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.380506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.380534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.380717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.380745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.380931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.380960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.381150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.381176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.381325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.381349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.381539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.381567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 
00:34:40.149 [2024-07-26 23:04:32.381728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.381756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.381942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.381970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.382130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.382156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.382356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.382382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.382527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.382552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.382693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.382718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.382890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.382915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.383078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.383109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.383301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.383329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.383483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.383513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 
00:34:40.149 [2024-07-26 23:04:32.383706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.383731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.383891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.383920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.384097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.384126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.384315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.384348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.384544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.384569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.384752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.384777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.384941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.384966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.385148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.385173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.385333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.385359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 00:34:40.149 [2024-07-26 23:04:32.385530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.149 [2024-07-26 23:04:32.385560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.149 qpair failed and we were unable to recover it. 
00:34:40.149 [2024-07-26 23:04:32.385707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.149 [2024-07-26 23:04:32.385732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.149 qpair failed and we were unable to recover it.
00:34:40.149 [... the same three-line failure sequence (posix.c:1037: connect() failed, errno = 111; nvme_tcp.c:2374: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for roughly 200 further consecutive attempts, timestamps running continuously from 23:04:32.385 through 23:04:32.427, every attempt failing identically ...]
00:34:40.154 [2024-07-26 23:04:32.427652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.154 [2024-07-26 23:04:32.427677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.154 qpair failed and we were unable to recover it.
00:34:40.154 [2024-07-26 23:04:32.427823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.154 [2024-07-26 23:04:32.427848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.154 qpair failed and we were unable to recover it. 00:34:40.154 [2024-07-26 23:04:32.428036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.154 [2024-07-26 23:04:32.428070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.154 qpair failed and we were unable to recover it. 00:34:40.154 [2024-07-26 23:04:32.428271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.154 [2024-07-26 23:04:32.428296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.154 qpair failed and we were unable to recover it. 00:34:40.154 [2024-07-26 23:04:32.428439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.154 [2024-07-26 23:04:32.428464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.154 qpair failed and we were unable to recover it. 00:34:40.154 [2024-07-26 23:04:32.428634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.428659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.428828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.428853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.429021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.429050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.429258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.429283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.429451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.429476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.429652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.429678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 
00:34:40.155 [2024-07-26 23:04:32.429848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.429873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.430044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.430078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.430223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.430248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.430419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.430444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.430614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.430640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.430809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.430833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.431033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.431067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.431206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.431232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.431366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.431391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.431560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.431585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 
00:34:40.155 [2024-07-26 23:04:32.431735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.431761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.431936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.431963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.432115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.432142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.432335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.432361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.432529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.432554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.432699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.432724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.432892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.432918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.433069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.433094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.433232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.433257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 00:34:40.155 [2024-07-26 23:04:32.433436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.155 [2024-07-26 23:04:32.433462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.155 qpair failed and we were unable to recover it. 
00:34:40.155 [2024-07-26 23:04:32.433628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.433654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.433854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.433880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.434019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.434043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.434204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.434234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.434406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.434431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.434591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.434616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.434756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.434781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.434957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.434983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.435154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.435191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.435333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.435358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 
00:34:40.156 [2024-07-26 23:04:32.435532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.435558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.435708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.435733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.435903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.435929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.436077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.436104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.436246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.436272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.436429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.436455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.436623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.436648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.436817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.436842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.437012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.437040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.437238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.437264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 
00:34:40.156 [2024-07-26 23:04:32.437416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.437442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.437593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.437619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.437791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.437816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.437989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.438015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.438184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.438210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.438374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.438399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.156 qpair failed and we were unable to recover it. 00:34:40.156 [2024-07-26 23:04:32.438566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.156 [2024-07-26 23:04:32.438591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.438733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.438758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.438921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.438946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.439123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.439149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 
00:34:40.157 [2024-07-26 23:04:32.439286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.439311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.439515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.439541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.439737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.439762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.439910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.439935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.440097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.440123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.440296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.440321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.440498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.440523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.440660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.440685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.440860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.440885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.441067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.441093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 
00:34:40.157 [2024-07-26 23:04:32.441232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.441257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.441424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.441449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.441614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.441639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.441811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.441837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.442014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.442040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.442202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.442228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.442395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.442420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.442594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.442620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.442758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.442783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.442942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.442967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 
00:34:40.157 [2024-07-26 23:04:32.443139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.443164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.443338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.443363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.443513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.443538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.443718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.443743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.443911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.443936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.444139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.444165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.444362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.444388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.444560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.444586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.444760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.444785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.444982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.445007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 
00:34:40.157 [2024-07-26 23:04:32.445174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.445199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.157 qpair failed and we were unable to recover it. 00:34:40.157 [2024-07-26 23:04:32.445337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.157 [2024-07-26 23:04:32.445362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.445513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.445538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.445682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.445708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.445880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.445905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.446074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.446100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.446266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.446291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.446435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.446460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.446627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.446653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.446802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.446827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 
00:34:40.158 [2024-07-26 23:04:32.446971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.446997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.447154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.447185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.447379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.447405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.447574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.447599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.447765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.447790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.447959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.447985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.448134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.448160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.448331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.448356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.448492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.448517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.448714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.448740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 
00:34:40.158 [2024-07-26 23:04:32.448904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.448929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.449106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.449132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.449329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.449354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.449528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.449554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.449722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.449747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.449941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.449969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.450171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.450197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.450337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.450364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.450532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.450558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.450753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.450781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 
00:34:40.158 [2024-07-26 23:04:32.450978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.451003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.451203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.451229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.451379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.451404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.451600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.451626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.451802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.451829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.452022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.452047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.452237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.452263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.452459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.452486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.452662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.158 [2024-07-26 23:04:32.452691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.158 qpair failed and we were unable to recover it. 00:34:40.158 [2024-07-26 23:04:32.452891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.159 [2024-07-26 23:04:32.452917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.159 qpair failed and we were unable to recover it. 
00:34:40.159 [2024-07-26 23:04:32.453102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.159 [2024-07-26 23:04:32.453128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.159 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x13da570, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats back-to-back roughly 200 more times over the next ~44 ms, from 23:04:32.453261 through 23:04:32.497233; only the microsecond field of each timestamp differs ...]
00:34:40.164 [2024-07-26 23:04:32.497387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.164 [2024-07-26 23:04:32.497412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.164 qpair failed and we were unable to recover it.
00:34:40.164 [2024-07-26 23:04:32.497614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.164 [2024-07-26 23:04:32.497639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.164 qpair failed and we were unable to recover it. 00:34:40.164 [2024-07-26 23:04:32.497835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.164 [2024-07-26 23:04:32.497860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.164 qpair failed and we were unable to recover it. 00:34:40.164 [2024-07-26 23:04:32.498035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.164 [2024-07-26 23:04:32.498070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.164 qpair failed and we were unable to recover it. 00:34:40.164 [2024-07-26 23:04:32.498219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.164 [2024-07-26 23:04:32.498245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.164 qpair failed and we were unable to recover it. 00:34:40.164 [2024-07-26 23:04:32.498394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.164 [2024-07-26 23:04:32.498420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.498595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.498625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.498826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.498852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.499024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.499050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.499227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.499253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.499458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.499483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 
00:34:40.165 [2024-07-26 23:04:32.499683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.499708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.499854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.499879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.500091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.500128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.500275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.500301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.500500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.500525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.500694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.500719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.500891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.500917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.501057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.501090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.501261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.501286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.501487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.501512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 
00:34:40.165 [2024-07-26 23:04:32.501720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.501745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.501910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.501935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.502102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.502130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.502301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.502326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.502476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.502502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.502676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.502701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.502837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.502862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.503035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.503068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.503216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.503241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.503406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.503432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 
00:34:40.165 [2024-07-26 23:04:32.503630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.503655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.503789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.503814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.503990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.504016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.504187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.504213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.504382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.504408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.504604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.504629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.504778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.504805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.504970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.504995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.505173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.505200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.505373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.505398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 
00:34:40.165 [2024-07-26 23:04:32.505542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.505568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.505715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.165 [2024-07-26 23:04:32.505740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.165 qpair failed and we were unable to recover it. 00:34:40.165 [2024-07-26 23:04:32.505880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.505905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.506047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.506090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.506267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.506293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.506439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.506464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.506666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.506694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.506907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.506935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.507108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.507134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.507303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.507328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 
00:34:40.166 [2024-07-26 23:04:32.507511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.507536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.507736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.507761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.507907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.507933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.508104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.508129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.508279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.508304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.508474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.508500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.508643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.508669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.508875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.508900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.509046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.509080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.509251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.509277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 
00:34:40.166 [2024-07-26 23:04:32.509448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.509475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.509672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.509697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.509866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.509892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.510071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.510097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.510270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.510295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.510490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.510515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.510655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.510680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.510843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.510868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.511039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.511082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.511241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.511267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 
00:34:40.166 [2024-07-26 23:04:32.511434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.511459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.511649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.511674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.511850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.511876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.512052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.512090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.512283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.512309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.512455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.512480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.512629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.512654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.166 [2024-07-26 23:04:32.512799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.166 [2024-07-26 23:04:32.512824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.166 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.512989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.513015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.513180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.513206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 
00:34:40.167 [2024-07-26 23:04:32.513380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.513406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.513570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.513596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.513774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.513800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.514002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.514027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.514210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.514236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.514431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.514456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.514628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.514653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.514828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.514853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.515031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.515057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.515232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.515258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 
00:34:40.167 [2024-07-26 23:04:32.515422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.515447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.515618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.515644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.515783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.515809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.515945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.515969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.516151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.516178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.516338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.516363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.516510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.516537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.516706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.516731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.516905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.516931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.517103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.517129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 
00:34:40.167 [2024-07-26 23:04:32.517305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.517335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.517509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.517534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.517702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.517727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.517904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.517929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.518127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.518152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.518345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.518370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.518568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.518594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.518734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.518758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.518900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.518926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.519094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.519119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 
00:34:40.167 [2024-07-26 23:04:32.519293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.519319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.519465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.519490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.519631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.519656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.519827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.519852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.520007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.520033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.167 [2024-07-26 23:04:32.520213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.167 [2024-07-26 23:04:32.520238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.167 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.520437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.520463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.520595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.520620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.520757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.520782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.520944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.520972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 
00:34:40.168 [2024-07-26 23:04:32.521138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.521168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.521370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.521396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.521592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.521617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.521783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.521809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.521951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.521976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.522173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.522200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.522354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.522379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.522561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.522590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.522761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.522786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 00:34:40.168 [2024-07-26 23:04:32.522981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.168 [2024-07-26 23:04:32.523007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.168 qpair failed and we were unable to recover it. 
00:34:40.168 [2024-07-26 23:04:32.523159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.168 [2024-07-26 23:04:32.523185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.168 qpair failed and we were unable to recover it.
00:34:40.168 [... 36 further identical retry records for tqpair=0x13da570 (2024-07-26 23:04:32.523333 through 23:04:32.530078) condensed; every attempt failed with errno = 111 ...]
00:34:40.169 [2024-07-26 23:04:32.530267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.169 [2024-07-26 23:04:32.530294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.169 qpair failed and we were unable to recover it.
00:34:40.169 [2024-07-26 23:04:32.530458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.169 [2024-07-26 23:04:32.530500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:40.169 qpair failed and we were unable to recover it.
00:34:40.169 [... 170 further identical retry records for tqpair=0x7fd438000b90 (2024-07-26 23:04:32.530686 through 23:04:32.568592) condensed; every attempt failed with errno = 111 ...]
00:34:40.174 [2024-07-26 23:04:32.568812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.174 [2024-07-26 23:04:32.568841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:40.174 qpair failed and we were unable to recover it.
00:34:40.174 [2024-07-26 23:04:32.569002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.569028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.569260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.569290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.569466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.569492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.569660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.569704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.569893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.569919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.570091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.570118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.570292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.570318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.570489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.570516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.570714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.570740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.570964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.570993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 
00:34:40.174 [2024-07-26 23:04:32.571192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.571223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.571412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.571440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.571629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.571655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.571861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.571890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.572077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.572117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.572326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.572355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.572530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.572556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.572731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.572757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.572956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.572981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.573171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.573199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 
00:34:40.174 [2024-07-26 23:04:32.573344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.573371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.573572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.573602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.573763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.573793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.573991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.574016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.574227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.574253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.574482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.574511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.574665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.574693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.574837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.174 [2024-07-26 23:04:32.574865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.174 qpair failed and we were unable to recover it. 00:34:40.174 [2024-07-26 23:04:32.575040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.575080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.575233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.575277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 
00:34:40.175 [2024-07-26 23:04:32.575470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.575499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.575664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.575691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.575877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.575904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.576099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.576129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.576310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.576338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.576500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.576528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.576743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.576769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.576940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.576966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.577182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.577212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.577435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.577461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 
00:34:40.175 [2024-07-26 23:04:32.577608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.577633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.577806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.577832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.577997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.578026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.578231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.578257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.578459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.578485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.578704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.578733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.578949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.578976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.579175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.579202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.579374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.579400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.579617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.579646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 
00:34:40.175 [2024-07-26 23:04:32.579828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.579857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.580079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.580109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.580298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.580324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.580549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.580578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.580768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.580808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.580986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.581017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.581241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.581268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.581444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.581471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.581640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.581666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.581908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.581934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 
00:34:40.175 [2024-07-26 23:04:32.582144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.582170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.582395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.582425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.582613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.582642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.582810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.582838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.583068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.583095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.175 qpair failed and we were unable to recover it. 00:34:40.175 [2024-07-26 23:04:32.583318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.175 [2024-07-26 23:04:32.583347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.583568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.583597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.583787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.583815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.584037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.584080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.584242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.584269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 
00:34:40.176 [2024-07-26 23:04:32.584455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.584485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.584668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.584696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.584865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.584890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.585117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.585147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.585365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.585393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.585578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.585606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.585772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.585798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.585997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.586023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.586216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.586245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.586407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.586438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 
00:34:40.176 [2024-07-26 23:04:32.586635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.586660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.586860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.586886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.587099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.587128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.587349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.587378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.587577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.587602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.587784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.587811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.588000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.588028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.588223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.588249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.588420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.588447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.588638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.588666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 
00:34:40.176 [2024-07-26 23:04:32.588836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.588864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.589048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.589084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.589281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.589306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.589482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.589510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.589745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.589771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.589971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.589999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.590172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.590199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.590372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.590398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.590568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.590594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.590787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.590816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 
00:34:40.176 [2024-07-26 23:04:32.591006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.591035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.591201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.591228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.591426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.176 [2024-07-26 23:04:32.591452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.176 qpair failed and we were unable to recover it. 00:34:40.176 [2024-07-26 23:04:32.591652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.591680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.591874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.591900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.592097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.592127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.592318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.592346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.592500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.592528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.592717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.592746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.592938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.592968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 
00:34:40.177 [2024-07-26 23:04:32.593151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.593180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.593368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.593394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.593595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.593620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.593812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.593842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.594020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.594048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.594250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.594279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.594460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.594486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.594685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.594714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.594903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.594932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.595146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.595176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 
00:34:40.177 [2024-07-26 23:04:32.595357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.595384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.595534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.595562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.595762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.595790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.595978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.596007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.596220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.596247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.596466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.596494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.596719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.596748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.596971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.596998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.597196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.597223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 00:34:40.177 [2024-07-26 23:04:32.597427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.177 [2024-07-26 23:04:32.597454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:40.177 qpair failed and we were unable to recover it. 
00:34:40.177 [2024-07-26 23:04:32.597640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.177 [2024-07-26 23:04:32.597666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:40.177 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats roughly 200 times between 23:04:32.597 and 23:04:32.644, differing only in timestamps; the failing handle is tqpair=0x7fd438000b90 until 23:04:32.635 and tqpair=0x13da570 from then on, always with addr=10.0.0.2, port=4420 ...]
00:34:40.468 [2024-07-26 23:04:32.644969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.468 [2024-07-26 23:04:32.644997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.468 qpair failed and we were unable to recover it.
00:34:40.468 [2024-07-26 23:04:32.645169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.645195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.645369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.645395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.645563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.645591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.645753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.645781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.645947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.645973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.646144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.646170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.646341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.646369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.646557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.646585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.646769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.646794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.646990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.647023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 
00:34:40.468 [2024-07-26 23:04:32.647228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.647254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.647414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.647442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.647610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.647635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.647795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.647862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.648053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.648088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.648278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.648303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.648475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.648500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.648695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.648723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.648906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.648934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.649121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.649150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 
00:34:40.468 [2024-07-26 23:04:32.649366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.649391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.649550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.649579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.649758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.649786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.649978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.650006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.650177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.650203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.650376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.650401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.650622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.650650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.650836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.650863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.651077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.651119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.651298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.651324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 
00:34:40.468 [2024-07-26 23:04:32.651549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.651577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.651741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.651767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.468 [2024-07-26 23:04:32.651964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.468 [2024-07-26 23:04:32.651992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.468 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.652198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.652224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.652419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.652447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.652633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.652661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.652827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.652856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.653008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.653051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.653250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.653276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.653509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.653537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 
00:34:40.469 [2024-07-26 23:04:32.653730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.653755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.653950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.653978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.654137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.654166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.654332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.654356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.654501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.654526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.654714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.654742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.654919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.654947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.655143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.655168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.655362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.655387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.655576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.655605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 
00:34:40.469 [2024-07-26 23:04:32.655795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.655823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.656012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.656040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.656242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.656267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.656463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.656492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.656666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.656691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.656878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.656905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.657095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.657120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.657270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.657295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.657494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.657519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.657693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.657720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 
00:34:40.469 [2024-07-26 23:04:32.657905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.657930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.658120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.658149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.658327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.658355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.658537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.658565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.658755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.658781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.659006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.659034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.659266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.659291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.659504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.659532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.659721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.659746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 00:34:40.469 [2024-07-26 23:04:32.659888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.469 [2024-07-26 23:04:32.659913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.469 qpair failed and we were unable to recover it. 
00:34:40.469 [2024-07-26 23:04:32.660050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.660082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.660250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.660277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.660455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.660480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.660664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.660692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.660909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.660937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.661136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.661162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.661329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.661354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.661624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.661679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.661901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.661926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.662132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.662158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 
00:34:40.470 [2024-07-26 23:04:32.662363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.662387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.662613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.662641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.662901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.662952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.663138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.663166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.663334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.663359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.663554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.663580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.663795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.663823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.664010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.664037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.664235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.664260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.664453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.664482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 
00:34:40.470 [2024-07-26 23:04:32.664648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.664675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.664890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.664918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.470 qpair failed and we were unable to recover it. 00:34:40.470 [2024-07-26 23:04:32.665094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.470 [2024-07-26 23:04:32.665126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.665275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.665300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.665470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.665495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.665679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.665707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.665902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.665927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.666104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.666129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.666298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.666323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.666508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.666535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 
00:34:40.471 [2024-07-26 23:04:32.666758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.666783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.666947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.666975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.667169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.667197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.667377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.667405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.667592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.667621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.667816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.667846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.668038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.668073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.668237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.668265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.668465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.668490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.668705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.668734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 
00:34:40.471 [2024-07-26 23:04:32.668948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.668976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.669144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.669172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.669357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.669382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.669577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.669607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.669819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.669847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.670041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.670072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.670213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.670238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.670453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.670481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.471 [2024-07-26 23:04:32.670642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.471 [2024-07-26 23:04:32.670670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.471 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.670822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.670850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 
00:34:40.472 [2024-07-26 23:04:32.671068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.671111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.671259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.671284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.671475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.671503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.671671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.671698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.671845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.671870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.672066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.672094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.672276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.672304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.672524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.672552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.672745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.672771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 00:34:40.472 [2024-07-26 23:04:32.672934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.472 [2024-07-26 23:04:32.672961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.472 qpair failed and we were unable to recover it. 
00:34:40.472 [2024-07-26 23:04:32.673162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:40.472 [2024-07-26 23:04:32.673191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 
00:34:40.472 qpair failed and we were unable to recover it. 
00:34:40.472-00:34:40.479 [... the same three-line failure repeats continuously with only the timestamps advancing, from 2024-07-26 23:04:32.673 through 23:04:32.718: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:34:40.479 [2024-07-26 23:04:32.718362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.718386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.718584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.718612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.718823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.718851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.719015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.719043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.719246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.719272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.719468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.719496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.719675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.719703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.719902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.719930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.720149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.720178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.720474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.720526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 
00:34:40.479 [2024-07-26 23:04:32.720713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.720741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.720907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.720935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.721149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.721175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.721377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.721405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.721585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.721613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.721812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.721840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.722010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.722034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.722212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.722238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.722434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.722462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.722646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.722673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 
00:34:40.479 [2024-07-26 23:04:32.722842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.722867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.723034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.723077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.723256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.723282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.723441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.723469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.723632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.723658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.723818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.723846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.724068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.724111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.724256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.724281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.724450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.724475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 00:34:40.479 [2024-07-26 23:04:32.724673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.479 [2024-07-26 23:04:32.724701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.479 qpair failed and we were unable to recover it. 
00:34:40.479 [2024-07-26 23:04:32.724865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.724895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.725047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.725083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.725275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.725301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.725498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.725526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.725711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.725739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.725950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.725979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.726175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.726201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.726364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.726392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.726577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.726604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.726764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.726793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 
00:34:40.480 [2024-07-26 23:04:32.727008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.727033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.727217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.727242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.727411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.727440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.727629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.727657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.727822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.727847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.728016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.728042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.728244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.728269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.728496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.728524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.728698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.728723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.728885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.728914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 
00:34:40.480 [2024-07-26 23:04:32.729193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.729242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.729404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.729429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.729600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.729625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.729845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.729873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.730055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.730087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.730314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.730342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.730529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.730554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.730698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.730723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.730918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.730943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.731117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.731143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 
00:34:40.480 [2024-07-26 23:04:32.731339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.731364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.731514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.731539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.731693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.731736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.731938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.731966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.732162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.732188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.732336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.732361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.732533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.732562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.732735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.480 [2024-07-26 23:04:32.732763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.480 qpair failed and we were unable to recover it. 00:34:40.480 [2024-07-26 23:04:32.732951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.732976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.733125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.733151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 
00:34:40.481 [2024-07-26 23:04:32.733336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.733364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.733518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.733545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.733765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.733790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.733988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.734017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.734217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.734243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.734397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.734425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.734576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.734604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.734750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.734792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.735007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.735035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.735242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.735267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 
00:34:40.481 [2024-07-26 23:04:32.735413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.735438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.735631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.735659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.735850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.735875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.736022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.736070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.736281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.736307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.736502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.736530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.736683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.736711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.736906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.736934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.737105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.737131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.737281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.737307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 
00:34:40.481 [2024-07-26 23:04:32.737453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.737479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.737649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.737674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.737862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.737888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.738049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.738083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.738251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.738277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.738443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.738468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.738635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.738660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.738830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.738856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.739081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.739110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.739296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.739324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 
00:34:40.481 [2024-07-26 23:04:32.739517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.739542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.739690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.739715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.739880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.739905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.740120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.740153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.740316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.740341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.481 [2024-07-26 23:04:32.740531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.481 [2024-07-26 23:04:32.740560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.481 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.740781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.740809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.740971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.741000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.741172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.741198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.741373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.741399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 
00:34:40.482 [2024-07-26 23:04:32.741569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.741596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.741781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.741808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.742001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.742026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.742205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.742230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.742420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.742450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.742669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.742702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.742919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.742945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.743152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.743179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.743346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.743375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.743543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.743571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 
00:34:40.482 [2024-07-26 23:04:32.743737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.743762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.743955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.743982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.744177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.744203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.744348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.744373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.744575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.744600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.744832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.744860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.745049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.745095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.745277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.745302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.745496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.745522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 00:34:40.482 [2024-07-26 23:04:32.745672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.482 [2024-07-26 23:04:32.745697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.482 qpair failed and we were unable to recover it. 
00:34:40.482 [2024-07-26 23:04:32.745885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.482 [2024-07-26 23:04:32.745912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.482 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt between 23:04:32.746106 and 23:04:32.791165; only the microsecond timestamps differ ...]
00:34:40.488 [2024-07-26 23:04:32.791323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.488 [2024-07-26 23:04:32.791351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.488 qpair failed and we were unable to recover it.
00:34:40.488 [2024-07-26 23:04:32.791519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.791544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.791736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.791765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.791959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.791984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.792123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.792148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.792321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.792346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.792527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.792552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.792781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.792806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.792942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.792967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.793169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.793194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.793383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.793412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 
00:34:40.488 [2024-07-26 23:04:32.793572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.793600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.793783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.793810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.793997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.794022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.794206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.794232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.794386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.794414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.794624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.794652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.794836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.794861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.795055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.795092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.795354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.795397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.488 qpair failed and we were unable to recover it. 00:34:40.488 [2024-07-26 23:04:32.795585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.488 [2024-07-26 23:04:32.795613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 
00:34:40.489 [2024-07-26 23:04:32.795779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.795804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.795986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.796011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.796177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.796203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.796396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.796426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.796592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.796617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.796790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.796815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.796978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.797006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.797228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.797254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.797396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.797420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.797615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.797643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 
00:34:40.489 [2024-07-26 23:04:32.797838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.797866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.798044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.798087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.798302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.798327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.798503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.798528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.798744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.798773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.798962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.798990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.799182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.799207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.799392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.799420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.799632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.799659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.799843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.799879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 
00:34:40.489 [2024-07-26 23:04:32.800040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.800078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.800270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.800300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.800471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.800500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.800691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.800719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.800893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.800919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.801116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.801146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.801316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.801345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.801536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.801564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.801759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.801789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.801952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.801982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 
00:34:40.489 [2024-07-26 23:04:32.802179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.802206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.802398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.802426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.802613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.802638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.802833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.802861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.803100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.803126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.803314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.803342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.803529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.803555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.489 [2024-07-26 23:04:32.803720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.489 [2024-07-26 23:04:32.803748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.489 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.803931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.803959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.804150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.804180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 
00:34:40.490 [2024-07-26 23:04:32.804342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.804367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.804560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.804588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.804807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.804836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.805024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.805053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.805259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.805285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.805449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.805477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.805663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.805692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.805850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.805880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.806085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.806111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.806309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.806339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 
00:34:40.490 [2024-07-26 23:04:32.806551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.806579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.806743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.806768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.806964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.806989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.807159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.807185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.807455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.807483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.807699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.807731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.807955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.807981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.808161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.808187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.808350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.808392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.808584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.808612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 
00:34:40.490 [2024-07-26 23:04:32.808778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.808807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.809000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.809029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.809233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.809259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.809480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.809508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.809715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.809740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.809916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.809941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.810144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.810170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.810320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.810345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.810544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.810569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.810797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.810826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 
00:34:40.490 [2024-07-26 23:04:32.811009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.811037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.811225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.811251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.811396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.811422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.811636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.811664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.811821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.811849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.490 [2024-07-26 23:04:32.812001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.490 [2024-07-26 23:04:32.812029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.490 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.812227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.812253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.812448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.812476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.812668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.812695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.812879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.812907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 
00:34:40.491 [2024-07-26 23:04:32.813085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.813111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.813284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.813309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.813469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.813498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.813640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.813666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.813839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.813864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.814045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.814083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.814269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.814297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.814487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.814516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.814690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.814716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.814859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.814884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 
00:34:40.491 [2024-07-26 23:04:32.815074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.815103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.815278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.815306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.815500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.815525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.815661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.815687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.815907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.815935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.816091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.816120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.816282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.816309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.816475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.816504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.816687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.816715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.816877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.816904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 
00:34:40.491 [2024-07-26 23:04:32.817099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.817126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.817348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.817376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.817544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.817572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.817763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.817791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.817978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.818004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.818149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.818179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.818350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.818376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.818556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.818581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.818768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.818794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 00:34:40.491 [2024-07-26 23:04:32.818960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.818986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it. 
00:34:40.491 [2024-07-26 23:04:32.819138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.491 [2024-07-26 23:04:32.819164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.491 qpair failed and we were unable to recover it.
[Roughly 210 near-identical repetitions of this three-line error group (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") are elided here. They span timestamps 2024-07-26 23:04:32.819138 through 23:04:32.863756; only the first and last occurrences are kept.]
00:34:40.497 [2024-07-26 23:04:32.863728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.863756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.497 qpair failed and we were unable to recover it.
00:34:40.497 [2024-07-26 23:04:32.863917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.863942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.497 qpair failed and we were unable to recover it. 00:34:40.497 [2024-07-26 23:04:32.864086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.864130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.497 qpair failed and we were unable to recover it. 00:34:40.497 [2024-07-26 23:04:32.864295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.864324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.497 qpair failed and we were unable to recover it. 00:34:40.497 [2024-07-26 23:04:32.864507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.864535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.497 qpair failed and we were unable to recover it. 00:34:40.497 [2024-07-26 23:04:32.864713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.864739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.497 qpair failed and we were unable to recover it. 00:34:40.497 [2024-07-26 23:04:32.864924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.864953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.497 qpair failed and we were unable to recover it. 00:34:40.497 [2024-07-26 23:04:32.865100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.865129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.497 qpair failed and we were unable to recover it. 00:34:40.497 [2024-07-26 23:04:32.865289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.865318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.497 qpair failed and we were unable to recover it. 00:34:40.497 [2024-07-26 23:04:32.865506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.497 [2024-07-26 23:04:32.865531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.865718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.865769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 
00:34:40.498 [2024-07-26 23:04:32.865989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.866017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.866218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.866244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.866418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.866443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.866619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.866648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.866829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.866856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.867043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.867080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.867306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.867331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.867533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.867558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.867748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.867776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.867935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.867963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 
00:34:40.498 [2024-07-26 23:04:32.868157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.868184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.868375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.868403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.868596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.868624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.868821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.868847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.869015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.869040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.869211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.869238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.869490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.869538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.869721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.869749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.869950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.869975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.870149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.870176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 
00:34:40.498 [2024-07-26 23:04:32.870327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.870352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.870523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.870547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.870743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.870768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.870942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.870967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.871172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.871200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.871414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.871442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.871608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.871633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.871794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.871822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.872005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.872033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.872226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.872252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 
00:34:40.498 [2024-07-26 23:04:32.872394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.872419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.872634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.872662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.872878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.872906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.873085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.873115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.873268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.873293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.873496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.498 [2024-07-26 23:04:32.873524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.498 qpair failed and we were unable to recover it. 00:34:40.498 [2024-07-26 23:04:32.873708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.873736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.873923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.873956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.874133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.874159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.874374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.874403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 
00:34:40.499 [2024-07-26 23:04:32.874585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.874613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.874770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.874798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.874991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.875016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.875199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.875225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.875389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.875420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.875582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.875610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.875797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.875822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.875981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.876010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.876186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.876212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.876404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.876432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 
00:34:40.499 [2024-07-26 23:04:32.876590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.876614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.876786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.876812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.877003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.877031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.877203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.877229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.877403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.877429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.877625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.877653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.877843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.877868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.878007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.878032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.878235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.878262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.878432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.878461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 
00:34:40.499 [2024-07-26 23:04:32.878715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.878767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.878978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.879006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.879173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.879198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.879394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.879422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.879616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.879648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.879805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.879833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.880022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.880049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.880208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.880233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.880380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.880421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.880590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.880615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 
00:34:40.499 [2024-07-26 23:04:32.880809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.880834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.880978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.881003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.881170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.881195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.881382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.499 [2024-07-26 23:04:32.881410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.499 qpair failed and we were unable to recover it. 00:34:40.499 [2024-07-26 23:04:32.881600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.881625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.881951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.882008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.882214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.882240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.882399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.882426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.882621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.882646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.882864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.882893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 
00:34:40.500 [2024-07-26 23:04:32.883123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.883149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.883297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.883322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.883465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.883490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.883708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.883737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.883912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.883939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.884125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.884154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.884341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.884366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.884508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.884533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.884697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.884740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.884902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.884930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 
00:34:40.500 [2024-07-26 23:04:32.885119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.885144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.885341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.885370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.885532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.885561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.885727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.885756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.885921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.885948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.886133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.886162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.886358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.886384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.886567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.886596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.886785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.886810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.887028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.887056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 
00:34:40.500 [2024-07-26 23:04:32.887262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.887288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.887463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.887488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.887661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.887687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.887889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.887918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.888103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.888132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.888286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.888314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.888508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.888533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.888686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.888712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.888854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.888880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.889046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.889079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 
00:34:40.500 [2024-07-26 23:04:32.889241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.889265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-26 23:04:32.889483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.500 [2024-07-26 23:04:32.889512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-26 23:04:32.889706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.501 [2024-07-26 23:04:32.889731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-26 23:04:32.889925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.501 [2024-07-26 23:04:32.889950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-26 23:04:32.890130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.501 [2024-07-26 23:04:32.890156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-26 23:04:32.890327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.501 [2024-07-26 23:04:32.890352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-26 23:04:32.890524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.501 [2024-07-26 23:04:32.890549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-26 23:04:32.890707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.501 [2024-07-26 23:04:32.890732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-26 23:04:32.890925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.501 [2024-07-26 23:04:32.890951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-26 23:04:32.891151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.501 [2024-07-26 23:04:32.891181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.501 qpair failed and we were unable to recover it. 
00:34:40.501 [2024-07-26 23:04:32.891358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.501 [2024-07-26 23:04:32.891385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.501 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure sequence repeats continuously from 23:04:32.891 through 23:04:32.935 (always errno = 111, tqpair=0x13da570, addr=10.0.0.2, port=4420); duplicate entries omitted ...]
00:34:40.507 [2024-07-26 23:04:32.935698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.935724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.935942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.935970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.936131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.936159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.936373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.936401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.936600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.936625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.936775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.936800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.936987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.937013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.937225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.937253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.937414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.937441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.937617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.937642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 
00:34:40.507 [2024-07-26 23:04:32.937802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.937827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.937989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.938017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.938213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.938239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.938430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.938458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.938671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.938700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.938919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.938944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.939114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.939140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.939332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.939360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.939567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.507 [2024-07-26 23:04:32.939592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.507 qpair failed and we were unable to recover it. 00:34:40.507 [2024-07-26 23:04:32.939818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.939851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 
00:34:40.779 [2024-07-26 23:04:32.940043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.940078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.940274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.940302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.940512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.940540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.940735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.940763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.940963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.940988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.941161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.941187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.941374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.941403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.941586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.941614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.941785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.941810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.941981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.942006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 
00:34:40.779 [2024-07-26 23:04:32.942146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.942171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.942339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.942364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.942534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.942560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.942746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.942774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.942960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.942984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.943133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.943160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.779 qpair failed and we were unable to recover it. 00:34:40.779 [2024-07-26 23:04:32.943335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.779 [2024-07-26 23:04:32.943361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.943510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.943535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.943710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.943735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.943905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.943933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 
00:34:40.780 [2024-07-26 23:04:32.944108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.944133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.944280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.944305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.944451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.944476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.944638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.944666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.944829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.944853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.945084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.945113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.945326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.945358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.945522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.945550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.945740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.945766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.945923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.945950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 
00:34:40.780 [2024-07-26 23:04:32.946146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.946174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.946358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.946385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.946548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.946573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.946784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.946812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.947003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.947029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.947183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.947209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.947405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.947431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.947591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.947620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.947803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.947831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.948021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.948046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 
00:34:40.780 [2024-07-26 23:04:32.948255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.948280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.948420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.948445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.948618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.948661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.948824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.948852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.949023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.949049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.949294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.949322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.949508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.949537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.949723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.949750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.949938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.949964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.950122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.950151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 
00:34:40.780 [2024-07-26 23:04:32.950334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.950362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.950543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.950571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.950742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.950768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.950953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.780 [2024-07-26 23:04:32.950985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.780 qpair failed and we were unable to recover it. 00:34:40.780 [2024-07-26 23:04:32.951155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.951183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.951339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.951367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.951539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.951564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.951727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.951752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.951907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.951935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.952102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.952128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 
00:34:40.781 [2024-07-26 23:04:32.952321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.952346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.952537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.952565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.952716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.952744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.952943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.952968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.953139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.953164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.953310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.953335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.953523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.953552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.953744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.953772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.953957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.953982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.954168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.954197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 
00:34:40.781 [2024-07-26 23:04:32.954385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.954412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.954593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.954621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.954795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.954820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.955011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.955037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.955238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.955268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.955452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.955480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.955697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.955723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.955896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.955924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.956117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.956142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.956314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.956339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 
00:34:40.781 [2024-07-26 23:04:32.956483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.956508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.956671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.956699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.956885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.956912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.957071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.957100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.957290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.957316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.957506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.957534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.957723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.957751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.957933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.957961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.958135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.958162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.958323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.958351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 
00:34:40.781 [2024-07-26 23:04:32.958501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.958529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-26 23:04:32.958688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-26 23:04:32.958716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.958881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.958907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.959120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.959148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.959377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.959403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.959595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.959623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.959809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.959834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.960022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.960049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.960266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.960294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.960480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.960509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 
00:34:40.782 [2024-07-26 23:04:32.960706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.960731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.960920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.960948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.961146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.961175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.961332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.961360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.961548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.961573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.961789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.961817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.962043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.962086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.962311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.962336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.962516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.962542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 00:34:40.782 [2024-07-26 23:04:32.962771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.962799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it. 
00:34:40.782 [2024-07-26 23:04:32.962998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.782 [2024-07-26 23:04:32.963023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [... the identical connect()/qpair-recovery error triplet repeats for every retry (~210 occurrences in total) from 2024-07-26 23:04:32.962998 through 23:04:33.006934, always with errno = 111 against tqpair=0x13da570, addr=10.0.0.2, port=4420 ...]
00:34:40.788 [2024-07-26 23:04:33.006905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.006934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it.
00:34:40.788 [2024-07-26 23:04:33.007150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.007175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.007344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.007373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.007537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.007563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.007730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.007755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.007924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.007950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.008122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.008151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.008367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.008396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.008583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.008611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.008812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.008837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.009025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.009053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 
00:34:40.788 [2024-07-26 23:04:33.009290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.009316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.009510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.009539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.009732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.009757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.009904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.009929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.010144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.010174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.010362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.010390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.010582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.010609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.010827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.010855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.011044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.011085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.011301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.011329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 
00:34:40.788 [2024-07-26 23:04:33.011543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.011568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-26 23:04:33.011783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-26 23:04:33.011810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.012006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.012034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.012236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.012264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.012485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.012510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.012699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.012727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.012928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.012953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.013153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.013181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.013377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.013402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.013564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.013589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 
00:34:40.789 [2024-07-26 23:04:33.013810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.013835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.014028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.014054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.014262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.014287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.014459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.014484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.014639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.014665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.014851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.014881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.015097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.015124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.015308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.015336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.015525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.015552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.015737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.015765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 
00:34:40.789 [2024-07-26 23:04:33.015960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.015985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.016181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.016210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.016372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.016400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.016590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.016618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.016780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.016806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.017023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.017055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.017263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.017291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.017482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.017508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.017656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.017681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.017817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.017842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 
00:34:40.789 [2024-07-26 23:04:33.018036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.018068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.018251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.018280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.018471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.018496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.018687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.018716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.018889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.018914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.019084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.019110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-26 23:04:33.019280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-26 23:04:33.019305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.019464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.019491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.019672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.019700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.019921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.019950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 
00:34:40.790 [2024-07-26 23:04:33.020109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.020135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.020307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.020333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.020477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.020502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.020687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.020712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.020914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.020939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.021159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.021187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.021339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.021367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.021586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.021611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.021817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.021842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.022043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.022078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 
00:34:40.790 [2024-07-26 23:04:33.022239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.022269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.022425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.022453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.022642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.022667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.022862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.022891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.023068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.023094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.023323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.023351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.023571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.023596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.023786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.023814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.023961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.023989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.024175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.024203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 
00:34:40.790 [2024-07-26 23:04:33.024362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.024389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.024576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.024604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.024797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.024822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.024988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.025013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.025181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.025206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.025403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.025432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.025632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.025658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.025829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.025854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.026048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.026082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.026308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.026336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 
00:34:40.790 [2024-07-26 23:04:33.026511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.026537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.026727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.026755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.026921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.026947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.027116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-26 23:04:33.027142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-26 23:04:33.027324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.027352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.027521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.027549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.027739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.027765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.027954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.027982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.028174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.028203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.028384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.028412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 
00:34:40.791 [2024-07-26 23:04:33.028609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.028634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.028853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.028881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.029073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.029102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.029315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.029343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.029500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.029525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.029676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.029703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.029869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.029895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.030097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.030126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.030290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.030315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.030487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.030531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 
00:34:40.791 [2024-07-26 23:04:33.030687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.030715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.030860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.030888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.031097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.031123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.031317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.031349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.031512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.031539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.031725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.031752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.031939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.031963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.032181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.032211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.032380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.032409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.032574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.032600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 
00:34:40.791 [2024-07-26 23:04:33.032768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.032794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.032977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.033005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.033190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.033219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.033408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.033433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.033603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.033628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.033814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.033842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.034031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.034067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.034258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.034287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.034454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.034479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-26 23:04:33.034654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-26 23:04:33.034697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 
00:34:40.791 [2024-07-26 23:04:33.034877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.791 [2024-07-26 23:04:33.034905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.791 qpair failed and we were unable to recover it.
00:34:40.791 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x13da570 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats unchanged, apart from timestamps, for every subsequent connection attempt from 23:04:33.035102 through 23:04:33.079273 ...]
00:34:40.797 [2024-07-26 23:04:33.079462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.079487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.079653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.079680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.079870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.079898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.080085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.080114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.080273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.080298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.080457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.080482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.080700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.080728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.080908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.080936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.081121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.081146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.081343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.081371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 
00:34:40.797 [2024-07-26 23:04:33.081560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.081589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.081786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.081813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.081985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.082010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.797 qpair failed and we were unable to recover it. 00:34:40.797 [2024-07-26 23:04:33.082221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.797 [2024-07-26 23:04:33.082251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.082463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.082491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.082671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.082699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.082921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.082946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.083138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.083171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.083360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.083385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.083550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.083592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 
00:34:40.798 [2024-07-26 23:04:33.083789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.083814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.083982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.084007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.084179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.084205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.084401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.084426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.084594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.084619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.084807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.084834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.085030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.085055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.085203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.085228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.085398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.085424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.085617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.085645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 
00:34:40.798 [2024-07-26 23:04:33.085832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.085860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.086040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.086075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.086246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.086271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.086429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.086457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.086647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.086675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.086860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.086888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.087105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.087131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.087344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.087372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.087553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.087581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.087761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.087789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 
00:34:40.798 [2024-07-26 23:04:33.088005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.088030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.088230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.088255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.088464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.088492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.088682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.088710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.088895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.088924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.089082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.089111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.798 [2024-07-26 23:04:33.089299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.798 [2024-07-26 23:04:33.089326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.798 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.089477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.089503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.089664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.089689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.089822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.089847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 
00:34:40.799 [2024-07-26 23:04:33.090031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.090067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.090257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.090285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.090476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.090501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.090720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.090747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.090905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.090932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.091083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.091112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.091280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.091305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.091491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.091519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.091710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.091738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.091916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.091944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 
00:34:40.799 [2024-07-26 23:04:33.092160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.092186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.092374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.092402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.092587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.092611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.092767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.092810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.092978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.093005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.093195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.093223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.093410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.093438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.093594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.093622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.093799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.093824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.094015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.094043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 
00:34:40.799 [2024-07-26 23:04:33.094210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.094239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.094432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.094457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.094609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.094634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.094772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.094815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.095028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.095056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.095232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.095260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.095477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.095502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.095729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.095757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.095971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.095999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.096194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.096220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 
00:34:40.799 [2024-07-26 23:04:33.096391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.096416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.096617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.096645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.096834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.096862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.097091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.097116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-26 23:04:33.097311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-26 23:04:33.097336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.097502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.097531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.097697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.097725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.097909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.097937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.098123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.098148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.098334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.098362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 
00:34:40.800 [2024-07-26 23:04:33.098553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.098580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.098755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.098781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.098928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.098954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.099125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.099151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.099342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.099370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.099521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.099549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.099711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.099738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.099958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.099986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.100172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.100201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.100354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.100382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 
00:34:40.800 [2024-07-26 23:04:33.100551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.100576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.100792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.100820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.101038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.101070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.101263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.101291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.101486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.101511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.101732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.101759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.101948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.101976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.102167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.102195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.102358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.102384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.102528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.102553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 
00:34:40.800 [2024-07-26 23:04:33.102753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.102781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.102986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.103014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.103211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.103241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.103434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.103462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.103650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.103675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.103870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.103897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.104085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.104111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.104313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.104341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.104521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.104549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.104689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.104717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 
00:34:40.800 [2024-07-26 23:04:33.104911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.104936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.105104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-26 23:04:33.105130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-26 23:04:33.105294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.105337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.105536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.105561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.105735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.105759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.105940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.105968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.106164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.106193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.106378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.106408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.106592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.106618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.106757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.106782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 
00:34:40.801 [2024-07-26 23:04:33.106992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.107020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.107197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.107226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.107417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.107442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.107587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.107612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.107807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.107835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.108050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.108087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.108276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.108301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.108516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.108544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.108708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.108733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.108875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.108904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 
00:34:40.801 [2024-07-26 23:04:33.109103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.109129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.109323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.109351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.109571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.109599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.109792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.109820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.109990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.110015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.110194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.110220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.110356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.110382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.110523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.110564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.110727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.110752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.110943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.110972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 
00:34:40.801 [2024-07-26 23:04:33.111128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.111157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.111336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.111365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.111533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.111559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.111708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.111734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.111925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.111953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.112129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.112158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.112351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.112376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.112522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.112565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.112763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.112788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-26 23:04:33.113008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-26 23:04:33.113036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 
00:34:40.801 [2024-07-26 23:04:33.113214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.113239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.113384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.113410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.113596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.113623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.113819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.113844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.113987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.114012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.114195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.114224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.114442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.114467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.114646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.114671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.114849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.114874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.115069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.115098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 
00:34:40.802 [2024-07-26 23:04:33.115286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.115314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.115496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.115524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.115718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.115743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.115879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.115904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.116081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.116124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.116316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.116344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.116531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.116556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.116729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.116754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.116968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.116995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.117186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.117214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 
00:34:40.802 [2024-07-26 23:04:33.117405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.117430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.117596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.117622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.117790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.117815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.117979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.118006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.118200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.118226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.118420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.118448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.118630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.118658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.118820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.118848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.119032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.119057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.119284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.119313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 
00:34:40.802 [2024-07-26 23:04:33.119500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.119528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.119721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.119749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.119941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.119966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.120157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.120185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-26 23:04:33.120350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-26 23:04:33.120378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.120564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.120592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.120762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.120787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.121006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.121034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.121237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.121263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.121480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.121508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 
00:34:40.803 [2024-07-26 23:04:33.121719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.121744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.121911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.121939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.122129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.122159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.122373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.122401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.122559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.122584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.122800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.122828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.123022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.123047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.123197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.123242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.123443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.123468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.123664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.123691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 
00:34:40.803 [2024-07-26 23:04:33.123843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.123871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.124071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.124099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.124259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.124284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.124454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.124480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.124697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.124725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.124952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.124977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.125174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.125200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.125420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.125447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.125669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.125694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.125863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.125888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 
00:34:40.803 [2024-07-26 23:04:33.126067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.126092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.126234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.126276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.126491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.126519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.126714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.126741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.126904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.126929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.127148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.127177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.127368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.127396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.127579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.127606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.127774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.127800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-26 23:04:33.127997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-26 23:04:33.128022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 
00:34:40.803 [2024-07-26 23:04:33.128181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.128207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.128385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.128410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.128605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.128630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.128826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.128854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.129036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.129075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.129254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.129280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.129434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.129459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.129660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.129685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.129851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.129879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.130068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.130097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 
00:34:40.804 [2024-07-26 23:04:33.130288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.130313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.130503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.130532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.130684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.130712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.130896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.130924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.131140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.131166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.131316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.131357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.131510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.131538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.131700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.131728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.131913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.131938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.132082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.132108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 
00:34:40.804 [2024-07-26 23:04:33.132302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.132344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.132493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.132521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.132732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.132757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.132944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.132972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.133168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.133193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.133359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.133387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.133574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.133599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.133781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.133808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.134020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.134048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.134244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.134273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 
00:34:40.804 [2024-07-26 23:04:33.134434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.134459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.134674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.134702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.134896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.134923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.135133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.135163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.135351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.135376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.135566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.135594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.135750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.135779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-26 23:04:33.135963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-26 23:04:33.135991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.136155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.136180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.136393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.136421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 
00:34:40.805 [2024-07-26 23:04:33.136609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.136637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.136801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.136829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.137022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.137047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.137240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.137265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.137447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.137472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.137668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.137696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.137889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.137914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.138071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.138097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.138265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.138290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.138522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.138550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 
00:34:40.805 [2024-07-26 23:04:33.138740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.138766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.138978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.139006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.139195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.139224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.139416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.139441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.139609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.139634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.139849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.139877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.140033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.140070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.140232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.140260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.140445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.140471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.140630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.140658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 
00:34:40.805 [2024-07-26 23:04:33.140842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.140870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.141032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.141077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.141286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.141311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.141511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.141536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.141734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.141762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.141913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.141941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.142119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.142145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.142296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.142321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.142486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.142511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.142687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.142715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 
00:34:40.805 [2024-07-26 23:04:33.142879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.142903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.143073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.143099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.143242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.143271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.143414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.143439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.143601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.143626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.805 [2024-07-26 23:04:33.143844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.805 [2024-07-26 23:04:33.143872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.805 qpair failed and we were unable to recover it. 00:34:40.806 [2024-07-26 23:04:33.144066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.806 [2024-07-26 23:04:33.144095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.806 qpair failed and we were unable to recover it. 00:34:40.806 [2024-07-26 23:04:33.144278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.806 [2024-07-26 23:04:33.144306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.806 qpair failed and we were unable to recover it. 00:34:40.806 [2024-07-26 23:04:33.144471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.806 [2024-07-26 23:04:33.144496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.806 qpair failed and we were unable to recover it. 00:34:40.806 [2024-07-26 23:04:33.144718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.806 [2024-07-26 23:04:33.144746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:40.806 qpair failed and we were unable to recover it. 
00:34:40.808 [2024-07-26 23:04:33.161811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.161835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.808 [2024-07-26 23:04:33.162064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.162093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.808 [2024-07-26 23:04:33.162263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.162289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.808 [2024-07-26 23:04:33.162439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.162465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.808 [2024-07-26 23:04:33.162631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.162656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.808 [2024-07-26 23:04:33.162849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.162879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.808 [2024-07-26 23:04:33.163084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.163122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.808 [2024-07-26 23:04:33.163322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.163367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.808 [2024-07-26 23:04:33.163704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.163774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.808 [2024-07-26 23:04:33.164011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.808 [2024-07-26 23:04:33.164044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.808 qpair failed and we were unable to recover it.
00:34:40.811 [2024-07-26 23:04:33.192008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.192038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.192294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.192326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.192567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.192596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.192830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.192861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.193068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.193101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.193319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.193352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.193590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.193619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.193803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.193839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.194052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.194092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.194326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.194354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 
00:34:40.811 [2024-07-26 23:04:33.194536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.194564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.194913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.194968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.195180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.195212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.195424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.195458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.195700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.195729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.811 [2024-07-26 23:04:33.195975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-26 23:04:33.196004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.811 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.196172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.196202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.196409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.196442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.196634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.196662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.196854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.196883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 
00:34:40.812 [2024-07-26 23:04:33.197110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.197142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.197368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.197400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.197642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.197671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.197915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.197947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.198150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.198182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.198373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.198405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.198620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.198650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.198863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.198894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.199078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.199111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.199326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.199356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 
00:34:40.812 [2024-07-26 23:04:33.199520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.199548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.199764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.199795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.200009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.200041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.200249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.200281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.200514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.200542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.200731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.200763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.200966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.200998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.201246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.201279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.201516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.201545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.201750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.201782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 
00:34:40.812 [2024-07-26 23:04:33.202020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.202052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.202293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.202325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.202544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.202573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.202797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.202847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.203068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.203100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.203318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.203347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.203537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.203567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-26 23:04:33.203786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-26 23:04:33.203823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.204034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.204088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.204288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.204320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 
00:34:40.813 [2024-07-26 23:04:33.204560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.204588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.204845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.204876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.205114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.205146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.205336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.205367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.205572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.205600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.205821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.205853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.206055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.206094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.206286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.206314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.206503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.206532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.206746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.206789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 
00:34:40.813 [2024-07-26 23:04:33.206969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.207001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.207192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.207224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.207455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.207483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.207731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.207763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.207994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.208026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.208244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.208275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.208487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.208516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.208724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.208756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.208967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.208998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.209228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.209261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 
00:34:40.813 [2024-07-26 23:04:33.209443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.209472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.209680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.209712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.209920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.209952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.210157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.210190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.210435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.210464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.210742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.210790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.210999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.211031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.211248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.211280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.211494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.211523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.211713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.211741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 
00:34:40.813 [2024-07-26 23:04:33.211935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.211964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.212177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.212206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.212376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.212405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.212615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.212647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.212877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-26 23:04:33.212907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-26 23:04:33.213121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.213153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.213372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.213400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.213621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.213660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.213908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.213936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.214104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.214134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 
00:34:40.814 [2024-07-26 23:04:33.214299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.214328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.214568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.214600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.214804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.214836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.215022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.215053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.215249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.215276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.215439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.215467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.215678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.215711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.215888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.215920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.216117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.216147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.216333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.216362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 
00:34:40.814 [2024-07-26 23:04:33.216545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.216575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.216792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.216822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.217029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.217056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.217262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.217290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.217537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.217567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.217736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.217765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.218001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.218029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.218242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.218273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.218501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.218531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.218738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.218769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 
00:34:40.814 [2024-07-26 23:04:33.218950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.218980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.219197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.219227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.219447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.219476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.219716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.219747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.219938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.219966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.220152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.220180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.220418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.220449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.220636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.220665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.220881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.220911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.221093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.221124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 
00:34:40.814 [2024-07-26 23:04:33.221350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.221380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-26 23:04:33.221582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-26 23:04:33.221612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-26 23:04:33.221823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-26 23:04:33.221853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-26 23:04:33.222069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-26 23:04:33.222100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-26 23:04:33.222292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-26 23:04:33.222322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-26 23:04:33.222484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-26 23:04:33.222528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-26 23:04:33.222736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-26 23:04:33.222764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-26 23:04:33.222943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-26 23:04:33.222978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-26 23:04:33.223171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-26 23:04:33.223200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-26 23:04:33.223355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-26 23:04:33.223384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 
00:34:40.815 [2024-07-26 23:04:33.223557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.223585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.223744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.223772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.223960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.223989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.224149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.224178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.224391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.224419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.224609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.224639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.224806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.224834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.225001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.225031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.225244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.225273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.225488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.225517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.225695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.225723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.225887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.225915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.226109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.226137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.226327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.226354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.226516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.226545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.226712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.226741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.226904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.226932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.227144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.227174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.227339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.227366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.227561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.227590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.227783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.227812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.228028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.228057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.228265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.228294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.228508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.228537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.228732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.228762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.228949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.228976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.229197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.229226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.229398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.229425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.229581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.815 [2024-07-26 23:04:33.229611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.815 qpair failed and we were unable to recover it.
00:34:40.815 [2024-07-26 23:04:33.229805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.229834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.230023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.230050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.230271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.230299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.230493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.230521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.230723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.230752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.230943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.230970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.231161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.231190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.231382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.231410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.231580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.231612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.231803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.231831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.232000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.232028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.232199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.232228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.232425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.232454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.232670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.232699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.232860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.232888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.233086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.233115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.233309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.233336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.233520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.233548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.233733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.233761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.233935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.233964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.234531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.234561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.234791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.234821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.235039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.235078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.235242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.235271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.235465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.235494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.235687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.235716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.235910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.235937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.236155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.236184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.236419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.236446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.236616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.236643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.236871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.236900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.237127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.237154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.237368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.237397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.237583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.816 [2024-07-26 23:04:33.237611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.816 qpair failed and we were unable to recover it.
00:34:40.816 [2024-07-26 23:04:33.237827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.237856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.238050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.238085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.238296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.238325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.238542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.238571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.238742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.238769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.238962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.238990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.239199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.239228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.239423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.239450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.239612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.239641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.239836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.239864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.240051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.240089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.240268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.240295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.240465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.240492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.240683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.240711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.240877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.240910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.241117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.241146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.241308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.241336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.241569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.241610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.241811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.241845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.242053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.242119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.242271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.242298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.242444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.242472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.242650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.242697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.242884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.242934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.243140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.243167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.243319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.243345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.243517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.243544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.243743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.243787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.243970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.243996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.244169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.244196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.244336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.244380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.244595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.244638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.244839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.244865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.245055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.245093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.245294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-26 23:04:33.245326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
00:34:40.817 [2024-07-26 23:04:33.245506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.245535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.245732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.245779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.245957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.245988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.246159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.246199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.246399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.246444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.246617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.246659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.246867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.246901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.247129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.247159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.247347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.247375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.247557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.247588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.247926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.247959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.248182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.248211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.248376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.248405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.248592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.248621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.248904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.248963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.249126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.249154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.249302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.249328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.249474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.249504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.249703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.249753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.249949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.250002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.250158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.250185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.250336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.250362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.250522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.250565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.250817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.250846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.251008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.251036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.251198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.251224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.251387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.251432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.251629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.251656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.251809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.251835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.252013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.252040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.252234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.252260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.252454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.252493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.252682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.252709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.252892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.252917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.253091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.253122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.253295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.253320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.253462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.818 [2024-07-26 23:04:33.253488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.818 qpair failed and we were unable to recover it.
00:34:40.818 [2024-07-26 23:04:33.253684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.253712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.254050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.254116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.254288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.254313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.254520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.254548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.254736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.254763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.254970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.254999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.255173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.255199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.255371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.255396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.255538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.255563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.255731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.255777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.255971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.256014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.256219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.256245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.256380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.256420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.256634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.256662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.256832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.256857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.257029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.257054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.257219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.257244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.257411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.257437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.257599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.257626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.257869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.257897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.258096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.258126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.258265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.258290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.258454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.258480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.258678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.258706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.258924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.258953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.259151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.259177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.259359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.259387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.259569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.259597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.259792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.259817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.259999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.260026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.260242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.260268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.260437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.260462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.260631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.260656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.260886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.260938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.261134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.261159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.261305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.261332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.261502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.261532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.819 qpair failed and we were unable to recover it.
00:34:40.819 [2024-07-26 23:04:33.261727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.819 [2024-07-26 23:04:33.261752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.261970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.261998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.262199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.262225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.262399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.262424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.262594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.262619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.262879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.262907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.263133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.263158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.263327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.263352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.263518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.263544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.263772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.263812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.263995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.264020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.264233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.264259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.264450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.264476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.264683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.264711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.264894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.264922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.265082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.265127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.265300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.265326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.265511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.265536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.265730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.265756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.265895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.265938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.266133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.266159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.266296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.266321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.266531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.266559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.266724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.266752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.266935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.266963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.267196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.267222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.267387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.267415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.267601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.267629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.267815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.267843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.268069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.268095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.268274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.268299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.268469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.268496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.268689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.268717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.268897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.268925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.269118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.269144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:40.820 [2024-07-26 23:04:33.269293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.820 [2024-07-26 23:04:33.269319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:40.820 qpair failed and we were unable to recover it.
00:34:41.107 [2024-07-26 23:04:33.269487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.108 [2024-07-26 23:04:33.269516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:41.108 qpair failed and we were unable to recover it.
00:34:41.108 [2024-07-26 23:04:33.269773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.108 [2024-07-26 23:04:33.269803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:41.108 qpair failed and we were unable to recover it.
00:34:41.108 [2024-07-26 23:04:33.270000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.270029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.270243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.270268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.270447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.270473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.270691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.270719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.270884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.270912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.271124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.271150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.271295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.271320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.271552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.271577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.271770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.271798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.271963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.272005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 
00:34:41.108 [2024-07-26 23:04:33.272171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.272197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.272393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.272417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.272592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.272621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.272795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.272823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.273001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.273029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.273228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.273253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.273413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.273439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.273653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.273681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.273867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.273895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.274067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.274092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 
00:34:41.108 [2024-07-26 23:04:33.274245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.274270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.274472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.274498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.274668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.274696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.274878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.274906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.275111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.275137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.275279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.275304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.275505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.275531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.275752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.275780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.275942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.275970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.276165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.276195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 
00:34:41.108 [2024-07-26 23:04:33.276386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.276415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.276614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.276639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.276800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.276828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.277021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.277049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.277233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.277259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.277442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.277467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.277653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.277695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.277854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.277881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.108 [2024-07-26 23:04:33.278049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.108 [2024-07-26 23:04:33.278084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.108 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.278254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.278279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 
00:34:41.109 [2024-07-26 23:04:33.278465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.278493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.278733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.278761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.278979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.279007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.279231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.279257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.279422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.279447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.279635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.279663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.279882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.279910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.280100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.280126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.280299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.280324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.280479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.280507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 
00:34:41.109 [2024-07-26 23:04:33.280707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.280735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.280922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.280946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.281120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.281146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.281316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.281342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.281567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.281592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.281784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.281809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.282040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.282077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.282305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.282331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.282495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.282523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.282742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.282767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 
00:34:41.109 [2024-07-26 23:04:33.282934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.282962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.283174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.283201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.283380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.283408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.283601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.283627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.283792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.283817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.283979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.284004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.284206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.284233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.284393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.284419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.284604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.284632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.284826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.284854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 
00:34:41.109 [2024-07-26 23:04:33.285073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.285117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.285312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.285337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.285512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.285537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.285726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.285755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.285942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.285970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.286158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.109 [2024-07-26 23:04:33.286185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.109 qpair failed and we were unable to recover it. 00:34:41.109 [2024-07-26 23:04:33.286326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.286351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.286549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.286577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.286740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.286768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.286937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.286962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 
00:34:41.110 [2024-07-26 23:04:33.287152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.287184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.287345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.287373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.287554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.287582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.287774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.287803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.287963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.287991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.288183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.288209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.288357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.288383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.288561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.288587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.288732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.288757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.288902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.288944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 
00:34:41.110 [2024-07-26 23:04:33.289164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.289193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.289389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.289414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.289601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.289629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.289793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.289821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.290001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.290029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.290214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.290240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.290428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.290455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.290644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.290672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.290817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.290844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.291056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.291089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 
00:34:41.110 [2024-07-26 23:04:33.291266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.291293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.291445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.291472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.291614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.291641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.291828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.291853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.292012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.292040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.292246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.292271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.292436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.292463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.292627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.292654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.292828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.292854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.292993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.293019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 
00:34:41.110 [2024-07-26 23:04:33.293218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.293245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.293426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.293452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.293594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.293636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.110 [2024-07-26 23:04:33.293816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.110 [2024-07-26 23:04:33.293842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.110 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.293989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.294016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.294184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.294211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.294389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.294415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.294586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.294611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.294811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.294836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.294971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.294996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 
00:34:41.111 [2024-07-26 23:04:33.295148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.295175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.295317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.295342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.295514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.295540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.295700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.295726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.295897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.295926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.296084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.296111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.296279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.296305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.296453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.296478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.296656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.296681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 00:34:41.111 [2024-07-26 23:04:33.296925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.296950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it. 
00:34:41.111 [2024-07-26 23:04:33.297145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.111 [2024-07-26 23:04:33.297171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.111 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats roughly 200 more times between 23:04:33.297 and 23:04:33.337, the timestamp advancing a fraction of a millisecond per attempt: every retry of connect() to 10.0.0.2 port 4420 returns errno = 111, the TCP qpair 0x13da570 never establishes, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:41.117 [2024-07-26 23:04:33.337578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.337603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.337743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.337772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.337967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.337992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.338168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.338194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.338365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.338390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.338536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.338561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.338700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.338726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.338895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.338920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.339098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.339124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.339278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.339304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 
00:34:41.117 [2024-07-26 23:04:33.339467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.339492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.339660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.339685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.339821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.339846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.339983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.340008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.340162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.340187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.340355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.340380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.340525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.340550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.340693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.340719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.340851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.340877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.341022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.341046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 
00:34:41.117 [2024-07-26 23:04:33.341205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.341232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.341378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.341403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.341600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.341625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.341764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.341790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.341986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.342011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.342160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.342186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.342323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.117 [2024-07-26 23:04:33.342348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.117 qpair failed and we were unable to recover it. 00:34:41.117 [2024-07-26 23:04:33.342517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.342543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.342694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.342724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.342918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.342943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 
00:34:41.118 [2024-07-26 23:04:33.343094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.343120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.343259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.343284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.343456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.343481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.343652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.343677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.343845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.343870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.344013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.344038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.344246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.344272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.344445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.344470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.344666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.344691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.344838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.344863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 
00:34:41.118 [2024-07-26 23:04:33.345026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.345051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.345205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.345230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.345379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.345404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.345538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.345564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.345733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.345758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.345908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.345934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.346130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.346156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.346324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.346350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.346484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.346510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.346649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.346675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 
00:34:41.118 [2024-07-26 23:04:33.346872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.346898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.347072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.347098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.347244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.347270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.347434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.347460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.347635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.347661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.347824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.347854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.347997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.348022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.348200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.348226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.348408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.348433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.348590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.348615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 
00:34:41.118 [2024-07-26 23:04:33.348804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.348830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.348975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.349000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.349174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.349199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.349395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.349421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.349566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.118 [2024-07-26 23:04:33.349593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.118 qpair failed and we were unable to recover it. 00:34:41.118 [2024-07-26 23:04:33.349767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.349792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.349954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.349980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.350147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.350174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.350322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.350348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.350496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.350522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 
00:34:41.119 [2024-07-26 23:04:33.350688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.350713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.350894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.350919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.351056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.351087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.351278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.351304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.351495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.351520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.351695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.351721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.351868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.351893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.352032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.352057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.352215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.352241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.352409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.352434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 
00:34:41.119 [2024-07-26 23:04:33.352630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.352656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.352832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.352857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.353000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.353026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.353180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.353206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.353382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.353407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.353581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.353607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.353799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.353825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.354001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.354026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.354211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.354237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.354417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.354443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 
00:34:41.119 [2024-07-26 23:04:33.354640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.354665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.354808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.354834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.355005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.355030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.355224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.355252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.355388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.355414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.355581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.355607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.355786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.355811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.355978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.356004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.356148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.356174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.356324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.356350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 
00:34:41.119 [2024-07-26 23:04:33.356513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.356539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.356706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.356732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.356880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.119 [2024-07-26 23:04:33.356905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.119 qpair failed and we were unable to recover it. 00:34:41.119 [2024-07-26 23:04:33.357080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.357116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.357254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.357279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.357473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.357498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.357672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.357698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.357832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.357857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.358026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.358052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.358226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.358251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 
00:34:41.120 [2024-07-26 23:04:33.358431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.358456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.358702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.358727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.358896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.358922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.359118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.359144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.359314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.359339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.359474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.359499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.359696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.359722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.359891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.359916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.360053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.360087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.360265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.360290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 
00:34:41.120 [2024-07-26 23:04:33.360458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.360484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.360657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.360683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.360831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.360856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.361102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.361133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.361288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.361314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.361480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.361505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.361709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.361735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.361902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.361927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.362092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.362118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 00:34:41.120 [2024-07-26 23:04:33.362364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.120 [2024-07-26 23:04:33.362390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.120 qpair failed and we were unable to recover it. 
00:34:41.120 [2024-07-26 23:04:33.362558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.120 [2024-07-26 23:04:33.362584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:41.120 qpair failed and we were unable to recover it.
00:34:41.121 [2024-07-26 23:04:33.366235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.121 [2024-07-26 23:04:33.366272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:41.121 qpair failed and we were unable to recover it.
00:34:41.126 [... the same three-line failure repeats for every retry from 23:04:33.362558 through 23:04:33.403399: connect() to 10.0.0.2, port=4420 fails with errno = 111 each time, almost always on tqpair=0x13da570 with a handful of attempts on tqpair=0x7fd448000b90, and every qpair fails without recovery ...]
00:34:41.126 [2024-07-26 23:04:33.403574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.403603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.403798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.403822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.403992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.404017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.404199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.404225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.404363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.404388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.404556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.404581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.404749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.404774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.404936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.404961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.405128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.405154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.405291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.405316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 
00:34:41.126 [2024-07-26 23:04:33.405488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.405513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.405681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.405706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.405953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.405978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.406149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.406175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.406344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.406369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-26 23:04:33.406530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.126 [2024-07-26 23:04:33.406555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.406726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.406750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.406950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.406975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.407147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.407172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.407338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.407363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 
00:34:41.127 [2024-07-26 23:04:33.407530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.407554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.407726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.407751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.407925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.407950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.408123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.408148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.408298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.408325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.408471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.408496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.408679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.408704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.408894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.408923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.409093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.409119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.409255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.409279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 
00:34:41.127 [2024-07-26 23:04:33.409474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.409499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.409690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.409715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.409885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.409911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.410049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.410081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.410247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.410272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.410415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.410440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.410616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.410641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.410804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.410829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.411003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.411028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.411216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.411242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 
00:34:41.127 [2024-07-26 23:04:33.411419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.411444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.411653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.411678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.411852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.411878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.412023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.412048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.412243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.412268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.412414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.412439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.412586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.412611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.412779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.412804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.412977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.413002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.413178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.413204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 
00:34:41.127 [2024-07-26 23:04:33.413372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.413398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.413592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.413617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.413809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.413834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-26 23:04:33.414010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-26 23:04:33.414035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.414214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.414244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.414402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.414427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.414596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.414621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.414785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.414810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.414950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.414975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.415142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.415168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 
00:34:41.128 [2024-07-26 23:04:33.415359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.415384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.415563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.415588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.415778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.415803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.415937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.415962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.416133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.416159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.416328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.416353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.416493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.416519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.416678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.416704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.416855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.416880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.417056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.417088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 
00:34:41.128 [2024-07-26 23:04:33.417286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.417312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.417459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.417484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.417655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.417681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.417862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.417887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.418070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.418095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.418275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.418300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.418444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.418469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.418610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.418635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.418808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.418833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.419003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.419028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 
00:34:41.128 [2024-07-26 23:04:33.419250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.419276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.419443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.419468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.419614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.419639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.419781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.419808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.419975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.420000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.420142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.420169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.420333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.420358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.420506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.420531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.420696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.420721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.420892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.420917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 
00:34:41.128 [2024-07-26 23:04:33.421084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.421110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.128 [2024-07-26 23:04:33.421258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.128 [2024-07-26 23:04:33.421283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.128 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.421447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.421472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.421609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.421635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.421783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.421809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.421979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.422004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.422174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.422200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.422370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.422395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.422572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.422597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.422763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.422788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 
00:34:41.129 [2024-07-26 23:04:33.422954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.422979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.423146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.423172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.423345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.423370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.423516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.423542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.423681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.423706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.423901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.423926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.424071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.424097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.424262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.424288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.424457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.424482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.424653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.424679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 
00:34:41.129 [2024-07-26 23:04:33.424870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.424896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.425070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.425096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.425235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.425260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.425402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.425427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.425625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.425650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.425826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.425852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.425985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.426010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.426202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.426228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.426401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.426426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.426595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.426620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 
00:34:41.129 [2024-07-26 23:04:33.426810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.426835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.427004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.427029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.427184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.427213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.427364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.427389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.427531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.427556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.427733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.427758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.427932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.427957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.428106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.428132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.428329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.428354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 00:34:41.129 [2024-07-26 23:04:33.428491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.129 [2024-07-26 23:04:33.428517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.129 qpair failed and we were unable to recover it. 
00:34:41.130 [2024-07-26 23:04:33.428686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.130 [2024-07-26 23:04:33.428711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.130 qpair failed and we were unable to recover it.
00:34:41.135 [... the same three-message sequence repeats back-to-back through 2024-07-26 23:04:33.469074: every reconnect attempt on tqpair=0x13da570 (addr=10.0.0.2, port=4420) fails in posix_sock_create with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error, and the qpair cannot be recovered ...]
00:34:41.135 [2024-07-26 23:04:33.469247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-26 23:04:33.469272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-26 23:04:33.469444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-26 23:04:33.469469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-26 23:04:33.469637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-26 23:04:33.469662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-26 23:04:33.469838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-26 23:04:33.469863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-26 23:04:33.470031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-26 23:04:33.470057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-26 23:04:33.470233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-26 23:04:33.470258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-26 23:04:33.470437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-26 23:04:33.470462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-26 23:04:33.470608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.470633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.470803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.470828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.470997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.471022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 
00:34:41.136 [2024-07-26 23:04:33.471196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.471222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.471366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.471391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.471564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.471590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.471739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.471765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.471911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.471936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.472104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.472130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.472300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.472324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.472494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.472520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.472667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.472692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.472831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.472856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 
00:34:41.136 [2024-07-26 23:04:33.472992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.473017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.473190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.473217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.473418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.473444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.473634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.473659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.473825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.473850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.473999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.474024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.474204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.474234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.474400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.474426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.474585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.474610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.474782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.474807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 
00:34:41.136 [2024-07-26 23:04:33.474976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.475001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.475165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.475191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.475358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.475384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.475524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.475549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.475692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.475717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.475889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.475913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.476094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.476122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.476318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.476344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.476506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.476531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.476702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.476727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 
00:34:41.136 [2024-07-26 23:04:33.476912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.476938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.477114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.477140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.477282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.477307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.477477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.477502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.477674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.136 [2024-07-26 23:04:33.477700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.136 qpair failed and we were unable to recover it. 00:34:41.136 [2024-07-26 23:04:33.477897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.477921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.478085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.478111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.478257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.478282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.478465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.478491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.478637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.478662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 
00:34:41.137 [2024-07-26 23:04:33.478801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.478826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.479003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.479028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.479212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.479238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.479438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.479463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.479609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.479634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.479785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.479812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.480006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.480031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.480210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.480236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.480387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.480412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.480562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.480587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 
00:34:41.137 [2024-07-26 23:04:33.480769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.480794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.480961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.480986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.481189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.481215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.481353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.481379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.481569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.481594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.481783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.481808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.481980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.482005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.482193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.482220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.482366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.482391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.482523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.482548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 
00:34:41.137 [2024-07-26 23:04:33.482725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.482751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.482923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.482949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.483112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.483138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.483305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.483330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.483507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.483533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.483712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.483737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.483908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.483934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.484106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.484132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.484302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.484327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.484522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.484548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 
00:34:41.137 [2024-07-26 23:04:33.484726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.484751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.484925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.484951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.485104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.137 [2024-07-26 23:04:33.485130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.137 qpair failed and we were unable to recover it. 00:34:41.137 [2024-07-26 23:04:33.485268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.485293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.485466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.485493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.485673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.485698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.485896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.485921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.486069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.486095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.486264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.486289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.486428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.486453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 
00:34:41.138 [2024-07-26 23:04:33.486625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.486651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.486823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.486848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.487018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.487044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.487296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.487321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.487521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.487550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.487689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.487714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.487863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.487887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.488065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.488091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.488260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.488285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.488455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.488481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 
00:34:41.138 [2024-07-26 23:04:33.488649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.488674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.488890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.488915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.489065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.489091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.489240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.489265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.489459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.489485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.489657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.489682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.489883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.489908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.490056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.490087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.490269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.490295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.490543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.490568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 
00:34:41.138 [2024-07-26 23:04:33.490762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.490787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.490935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.490960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.491127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.491154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.491303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.491330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.491526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.491551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.491744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.491769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.491916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.491941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.492144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.492169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.492314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.492339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.138 [2024-07-26 23:04:33.492491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.492516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 
00:34:41.138 [2024-07-26 23:04:33.492690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.138 [2024-07-26 23:04:33.492715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.138 qpair failed and we were unable to recover it. 00:34:41.139 [2024-07-26 23:04:33.492886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.492916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it. 00:34:41.139 [2024-07-26 23:04:33.493068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.493093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it. 00:34:41.139 [2024-07-26 23:04:33.493284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.493309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it. 00:34:41.139 [2024-07-26 23:04:33.493478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.493503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it. 00:34:41.139 [2024-07-26 23:04:33.493640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.493665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it. 00:34:41.139 [2024-07-26 23:04:33.493810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.493835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it. 00:34:41.139 [2024-07-26 23:04:33.494017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.494042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it. 00:34:41.139 [2024-07-26 23:04:33.494245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.494270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it. 00:34:41.139 [2024-07-26 23:04:33.494434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.494459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it. 
00:34:41.139 [2024-07-26 23:04:33.494626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.139 [2024-07-26 23:04:33.494652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.144 [2024-07-26 23:04:33.534746] (last message sequence repeated ~210 times between 23:04:33.494626 and 23:04:33.534746: every connect() to tqpair=0x13da570, addr=10.0.0.2, port=4420 failed with errno = 111 and the qpair could not be recovered)
00:34:41.144 [2024-07-26 23:04:33.534924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-26 23:04:33.534949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.535102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.535128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.535307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.535332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.535469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.535494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.535656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.535681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.535849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.535874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.536011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.536036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.536213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.536239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.536408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.536433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.536602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.536626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 
00:34:41.145 [2024-07-26 23:04:33.536799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.536824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.536957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.536982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.537134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.537161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.537308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.537334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.537500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.537526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.537690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.537715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.537913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.537938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.538106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.538132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.538296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.538321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.538465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.538490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 
00:34:41.145 [2024-07-26 23:04:33.538691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.538716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.538888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.538913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.539112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.539138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.539311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.539336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.539477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.539502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.539645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.539675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.539854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.539879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.540048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.540091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.540265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.540291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.540434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.540458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 
00:34:41.145 [2024-07-26 23:04:33.540653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.540678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.540820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.540845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.540978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-26 23:04:33.541004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-26 23:04:33.541201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.541227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.541425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.541450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.541620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.541646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.541816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.541841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.542008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.542033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.542188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.542214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.542415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.542440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 
00:34:41.146 [2024-07-26 23:04:33.542611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.542636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.542811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.542836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.543006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.543031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.543220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.543245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.543382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.543407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.543604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.543630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.543766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.543791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.543962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.543987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.544185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.544210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.544384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.544409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 
00:34:41.146 [2024-07-26 23:04:33.544606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.544632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.544796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.544821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.544990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.545015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.545183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.545209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.545356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.545381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.545583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.545607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.545750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.545775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.545973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.545998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.546143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.546168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.546318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.546345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 
00:34:41.146 [2024-07-26 23:04:33.546544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.546569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.546722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.546747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.546927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.546952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.547130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.547156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.547326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.547351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.547521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.547546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.547748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.547773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.547908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.547933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.548106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.548133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-26 23:04:33.548301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.548325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 
00:34:41.146 [2024-07-26 23:04:33.548476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-26 23:04:33.548501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.548673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.548697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.548837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.548862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.549028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.549053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.549269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.549294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.549451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.549476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.549649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.549674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.549844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.549869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.550069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.550095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.550259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.550284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 
00:34:41.147 [2024-07-26 23:04:33.550436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.550461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.550632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.550657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.550823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.550848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.550983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.551008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.551206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.551231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.551406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.551431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.551600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.551625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.551792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.551817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.551966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.551992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.552191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.552217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 
00:34:41.147 [2024-07-26 23:04:33.552366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.552391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.552563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.552588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.552754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.552778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.552944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.552973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.553144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.553170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.553364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.553389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.553581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.553606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.553743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.553770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.553943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.553968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.554161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.554187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 
00:34:41.147 [2024-07-26 23:04:33.554349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.554374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.554540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.554565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.554702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.554728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.554902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.554927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.555098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.555123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.555295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.555320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.555488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.555513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.555713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-26 23:04:33.555738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-26 23:04:33.555885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.555910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.556077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.556103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 
00:34:41.148 [2024-07-26 23:04:33.556274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.556299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.556469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.556494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.556645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.556676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.556843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.556868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.557037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.557074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.557247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.557272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.557438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.557463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.557600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.557625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.557823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.557848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.558042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.558074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 
00:34:41.148 [2024-07-26 23:04:33.558241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.558272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.558471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.558496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.558632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.558657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.558827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.558852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.559001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.559026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.559174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.559199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.559405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.559430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.559604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.559630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.559766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.559791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 00:34:41.148 [2024-07-26 23:04:33.559961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.148 [2024-07-26 23:04:33.559987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.148 qpair failed and we were unable to recover it. 
00:34:41.148 [2024-07-26 23:04:33.560123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.148 [2024-07-26 23:04:33.560148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:41.148 qpair failed and we were unable to recover it.
[... the same three-line error repeats back-to-back from 23:04:33.560 through 23:04:33.600: every connect() to 10.0.0.2 port 4420 fails with errno = 111, and each reconnect attempt for tqpair=0x13da570 is abandoned ...]
00:34:41.430 [2024-07-26 23:04:33.600359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.430 [2024-07-26 23:04:33.600384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:41.430 qpair failed and we were unable to recover it.
00:34:41.430 [2024-07-26 23:04:33.600554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.430 [2024-07-26 23:04:33.600579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.430 qpair failed and we were unable to recover it. 00:34:41.430 [2024-07-26 23:04:33.600781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.430 [2024-07-26 23:04:33.600806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.430 qpair failed and we were unable to recover it. 00:34:41.430 [2024-07-26 23:04:33.600941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.430 [2024-07-26 23:04:33.600966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.430 qpair failed and we were unable to recover it. 00:34:41.430 [2024-07-26 23:04:33.601132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.430 [2024-07-26 23:04:33.601158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.430 qpair failed and we were unable to recover it. 00:34:41.430 [2024-07-26 23:04:33.601297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.430 [2024-07-26 23:04:33.601322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.430 qpair failed and we were unable to recover it. 00:34:41.430 [2024-07-26 23:04:33.601494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.430 [2024-07-26 23:04:33.601519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.430 qpair failed and we were unable to recover it. 00:34:41.430 [2024-07-26 23:04:33.601658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.430 [2024-07-26 23:04:33.601683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.430 qpair failed and we were unable to recover it. 00:34:41.430 [2024-07-26 23:04:33.601851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.601876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.602049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.602084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.602252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.602278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 
00:34:41.431 [2024-07-26 23:04:33.602418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.602443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.602591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.602616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.602758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.602783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.602958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.602983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.603125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.603151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.603297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.603323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.603491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.603515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.603682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.603707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.603881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.603906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.604077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.604103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 
00:34:41.431 [2024-07-26 23:04:33.604274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.604299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.604470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.604495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.604663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.604688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.604935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.604960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.605134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.605160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.605310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.605336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.605582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.605607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.605768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.605793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.605967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.605992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.606132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.606158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 
00:34:41.431 [2024-07-26 23:04:33.606305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.606329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.606576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.606601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.606764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.606790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.606934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.606959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.607094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.607120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.607290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.607314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.607483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.607508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.607671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.607696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.607837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.607862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.608036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.608069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 
00:34:41.431 [2024-07-26 23:04:33.608217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.608243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.608416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.608440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.608575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.608601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.608749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.608774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.608944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.431 [2024-07-26 23:04:33.608969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.431 qpair failed and we were unable to recover it. 00:34:41.431 [2024-07-26 23:04:33.609150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.609175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.609344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.609370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.609541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.609566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.609812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.609837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.609982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.610008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 
00:34:41.432 [2024-07-26 23:04:33.610184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.610209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.610405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.610430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.610617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.610642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.610887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.610912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.611082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.611108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.611277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.611303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.611465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.611490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.611658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.611683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.611831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.611856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.612002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.612026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 
00:34:41.432 [2024-07-26 23:04:33.612208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.612235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.612401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.612426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.612620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.612645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.612820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.612845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.612982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.613007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.613150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.613180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.613365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.613391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.613526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.613551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.613713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.613738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.613906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.613931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 
00:34:41.432 [2024-07-26 23:04:33.614125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.614151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.614317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.614342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.614507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.614532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.614702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.614727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.614922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.614947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.615118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.432 [2024-07-26 23:04:33.615143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.432 qpair failed and we were unable to recover it. 00:34:41.432 [2024-07-26 23:04:33.615309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.615334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.615507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.615534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.615711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.615736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.615881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.615906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 
00:34:41.433 [2024-07-26 23:04:33.616073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.616099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.616264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.616289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.616460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.616486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.616658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.616683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.616847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.616872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.617086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.617112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.617258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.617283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.617450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.617475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.617641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.617666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.617863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.617888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 
00:34:41.433 [2024-07-26 23:04:33.618085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.618111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.618262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.618287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.618453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.618482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.618645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.618670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.618818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.618843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.619012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.619037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.619184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.619210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.619353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.619379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.619545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.619570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.619709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.619734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 
00:34:41.433 [2024-07-26 23:04:33.619879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.619904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.620085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.620111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.620265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.620290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.620431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.620456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.620601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.620626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.620789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.620814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.621000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.621025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.621206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.621232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.621401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.621426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 00:34:41.433 [2024-07-26 23:04:33.621592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.433 [2024-07-26 23:04:33.621617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.433 qpair failed and we were unable to recover it. 
00:34:41.433 [2024-07-26 23:04:33.621812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.621837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.622019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.622044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.622215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.622241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.622393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.622418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.622598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.622623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.622765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.622792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.622957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.622983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.623124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.623151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.623323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.623349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.623492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.623521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 
00:34:41.434 [2024-07-26 23:04:33.623664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.623689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.623824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.623849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.624023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.624048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.624210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.624235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.624373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.624399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.624535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.624561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.624700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.624725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.624897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.624922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.625083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.625109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 00:34:41.434 [2024-07-26 23:04:33.625277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.434 [2024-07-26 23:04:33.625303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.434 qpair failed and we were unable to recover it. 
00:34:41.434 [2024-07-26 23:04:33.625448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.434 [2024-07-26 23:04:33.625474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:41.434 qpair failed and we were unable to recover it.
00:34:41.440 [same connect()/qpair-failure message triple repeated for every retry from 23:04:33.625448 through 23:04:33.666105 (tqpair=0x13da570, addr=10.0.0.2, port=4420); duplicate log lines omitted]
00:34:41.440 [2024-07-26 23:04:33.666289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.666315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.666458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.666484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.666621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.666647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.666790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.666815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.667007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.667032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.667180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.667205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.667346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.667376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.667555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.667580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.667756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.667782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.667951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.667975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 
00:34:41.440 [2024-07-26 23:04:33.668184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.668210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.668379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.668404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.668579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.668604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.668799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.668831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.668984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.669009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.669177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.669202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.669348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.669373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.669545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.669571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.669714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.669739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.669934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.669959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 
00:34:41.440 [2024-07-26 23:04:33.670137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.670162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.670299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.670324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.670470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.670495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.670682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.670707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.670886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.670912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.671052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.671084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.671232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.671257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.671434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.671459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.671660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.440 [2024-07-26 23:04:33.671685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.440 qpair failed and we were unable to recover it. 00:34:41.440 [2024-07-26 23:04:33.671849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.671874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 
00:34:41.441 [2024-07-26 23:04:33.672045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.672078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.672217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.672242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.672400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.672425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.672597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.672622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.672763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.672787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.672959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.672984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.673163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.673189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.673360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.673389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.673557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.673582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.673727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.673752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 
00:34:41.441 [2024-07-26 23:04:33.673918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.673943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.674115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.674141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.674279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.674304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.674479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.674504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.674669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.674694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.674837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.674861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.675071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.675096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.675266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.675292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.675486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.675511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.675676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.675701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 
00:34:41.441 [2024-07-26 23:04:33.675897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.675922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.676096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.676123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.676289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.676314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.676480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.676505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.676675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.676700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.676867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.676892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.677068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.677094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.677278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.677303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.677499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.677524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.677697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.677722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 
00:34:41.441 [2024-07-26 23:04:33.677915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.677940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.678111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.678137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.678287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.678313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.678486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.678511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.678657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.678686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.678834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.441 [2024-07-26 23:04:33.678859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.441 qpair failed and we were unable to recover it. 00:34:41.441 [2024-07-26 23:04:33.679005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.679030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.679215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.679241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.679408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.679433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.679612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.679638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 
00:34:41.442 [2024-07-26 23:04:33.679834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.679859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.680039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.680071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.680268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.680293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.680464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.680489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.680653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.680679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.680848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.680873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.681017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.681042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.681225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.681250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.681421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.681446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.681639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.681664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 
00:34:41.442 [2024-07-26 23:04:33.681827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.681852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.682016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.682041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.682239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.682264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.682457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.682482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.682622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.682648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.682848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.682873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.683049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.683082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.683263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.683290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.683490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.683515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.683682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.683707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 
00:34:41.442 [2024-07-26 23:04:33.683842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.683867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.684036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.684074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.684273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.684299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.684481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.684506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.684658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.684684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.684847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.684873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.685049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.685083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.685260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.442 [2024-07-26 23:04:33.685285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.442 qpair failed and we were unable to recover it. 00:34:41.442 [2024-07-26 23:04:33.685481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.685506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.685654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.685679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 
00:34:41.443 [2024-07-26 23:04:33.685820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.685844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.685984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.686009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.686159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.686185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.686332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.686357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.686552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.686576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.686778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.686803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.686954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.686979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.687147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.687173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.687320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.687345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.687526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.687551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 
00:34:41.443 [2024-07-26 23:04:33.687731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.687756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.687952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.687977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.688176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.688202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.688370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.688395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.688536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.688561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.688733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.688757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.688893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.688918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.689081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.689107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.689283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.689308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.689451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.689477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 
00:34:41.443 [2024-07-26 23:04:33.689675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.689701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.689883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.689908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.690107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.690134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.690276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.690301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.690445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.690470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.690606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.690631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.690786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.690811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.690958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.690984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.691153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.691179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.691375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.691400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 
00:34:41.443 [2024-07-26 23:04:33.691570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.691595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.691734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.691759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.691909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.691934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.692101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.692128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.692325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.692350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.443 [2024-07-26 23:04:33.692490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.443 [2024-07-26 23:04:33.692515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.443 qpair failed and we were unable to recover it. 00:34:41.444 [2024-07-26 23:04:33.692685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.444 [2024-07-26 23:04:33.692710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.444 qpair failed and we were unable to recover it. 00:34:41.444 [2024-07-26 23:04:33.692909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.444 [2024-07-26 23:04:33.692933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.444 qpair failed and we were unable to recover it. 00:34:41.444 [2024-07-26 23:04:33.693069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.444 [2024-07-26 23:04:33.693095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.444 qpair failed and we were unable to recover it. 00:34:41.444 [2024-07-26 23:04:33.693292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.444 [2024-07-26 23:04:33.693318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.444 qpair failed and we were unable to recover it. 
00:34:41.449 [2024-07-26 23:04:33.729894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.729920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.730111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.730138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.730270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.730296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.730463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.730488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.730657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.730682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.730876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.730901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.731074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.731101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.731270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.731296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.731437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.731463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.731628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.731653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 
00:34:41.449 [2024-07-26 23:04:33.731848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.731874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.732044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.732076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.732250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.732277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.732443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.732468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.732638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.732664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.732832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.732858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.733025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.733050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.733227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.733253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.733423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.733448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.733596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.733622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 
00:34:41.449 [2024-07-26 23:04:33.733797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.733823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.733967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.733992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.734159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.734185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.734326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.734351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.449 [2024-07-26 23:04:33.734492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.449 [2024-07-26 23:04:33.734517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.449 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.734711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.734736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.734902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.734927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.735085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.735112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.735278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.735304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.735463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.735488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 
00:34:41.450 [2024-07-26 23:04:33.735631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.735656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.735848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.735873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.736042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.736074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.736260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.736285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.736453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.736478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.736643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.736668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.736809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.736834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.736979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.737004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.737139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.737164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.737340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.737366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 
00:34:41.450 [2024-07-26 23:04:33.737506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.737531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.737721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.737747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.737912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.737937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.738110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.738136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.738285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.738310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.738455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.738480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.738675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.738704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.738891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.738916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.739097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.739125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.739268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.739294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 
00:34:41.450 [2024-07-26 23:04:33.739462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.739487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.739679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.739705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.739847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.739873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.740073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.740099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.740246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.740272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.740407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.740432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.740627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.740653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.740802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.740827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.741023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.741049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-26 23:04:33.741202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.741228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 
00:34:41.450 [2024-07-26 23:04:33.741414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-26 23:04:33.741439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.741585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.741610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.741788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.741814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.741979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.742005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.742142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.742168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.742314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.742339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.742534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.742559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.742724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.742750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.742922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.742948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.743093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.743119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 
00:34:41.451 [2024-07-26 23:04:33.743290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.743316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.743497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.743528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.743699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.743725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.743921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.743950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.744126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.744151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.744298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.744323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.744518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.744543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.744683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.744708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.744880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.744905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.745078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.745104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 
00:34:41.451 [2024-07-26 23:04:33.745268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.745293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.745470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.745495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.745636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.745662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.745797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.745821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.745965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.745990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.746157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.746184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.746351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.746377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.746547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.746573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.746745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.746772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.746917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.746942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 
00:34:41.451 [2024-07-26 23:04:33.747081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.747107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.747273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.747298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-26 23:04:33.747459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-26 23:04:33.747485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.747632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.747659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.747853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.747878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.748050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.748083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.748235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.748261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.748400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.748425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.748571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.748596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.748762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.748787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 
00:34:41.452 [2024-07-26 23:04:33.748924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.748953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.749099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.749125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.749290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.749315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.749497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.749523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.749658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.749683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.749845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.749870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.750034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.750066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.750239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.750265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.750404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.750429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.750634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.750659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 
00:34:41.452 [2024-07-26 23:04:33.750830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.750855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.751025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.751050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.751239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.751266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.751404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.751429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.751634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.751660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.751810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.751834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.751975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.752000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.752145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.752171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.752319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.752344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.752543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.752569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 
00:34:41.452 [2024-07-26 23:04:33.752746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.752771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.752943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.752968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.753137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.753163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.753331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.753356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.753525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.753550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.753698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.753723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.753914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.753939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.754102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.754128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.754283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.754308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 00:34:41.452 [2024-07-26 23:04:33.754488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.452 [2024-07-26 23:04:33.754513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.452 qpair failed and we were unable to recover it. 
00:34:41.452 [2024-07-26 23:04:33.754679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.754704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.754899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.754924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.755090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.755116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.755280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.755305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.755465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.755490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.755651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.755676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.755851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.755876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.756049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.756083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.756252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.756277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.756445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.756471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 
00:34:41.453 [2024-07-26 23:04:33.756644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.756669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.756844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.756873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.757040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.757074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.757225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.757250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.757414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.757446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.757597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.757623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.757818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.757846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.758010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.758036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.758193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.758219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 00:34:41.453 [2024-07-26 23:04:33.758358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.453 [2024-07-26 23:04:33.758383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.453 qpair failed and we were unable to recover it. 
00:34:41.453 [log trimmed: the same three-line pattern (posix_sock_create connect() failed with errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without variation through [2024-07-26 23:04:33.796894]]
00:34:41.459 [2024-07-26 23:04:33.797070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.797096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.797269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.797295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.797476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.797501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.797643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.797669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.797840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.797866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.798037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.798068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.798227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.798252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.798397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.798422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.798594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.798623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.798788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.798814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 
00:34:41.459 [2024-07-26 23:04:33.798960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.798985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.799162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.799190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.799361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.799386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.799554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.799579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.799749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.799774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.799905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.799930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.800080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.800106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.800269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.800294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.800458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.800484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.800623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.800649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 
00:34:41.459 [2024-07-26 23:04:33.800816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.800841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.800980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.801005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.801202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.801228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.801370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.801395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.801561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.801586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.801759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.801785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.801951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.801976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.802154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.802180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.802353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.802378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.802547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.802573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 
00:34:41.459 [2024-07-26 23:04:33.802744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.802769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.802942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.802967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.803132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.803158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.803334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.803359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.803509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.803533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.803683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.803714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.803890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-26 23:04:33.803916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-26 23:04:33.804087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.804116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.804294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.804320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.804511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.804536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 
00:34:41.460 [2024-07-26 23:04:33.804678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.804703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.804879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.804904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.805067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.805093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.805271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.805297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.805465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.805491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.805626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.805652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.805822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.805847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.806020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.806046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.806223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.806248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.806395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.806420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 
00:34:41.460 [2024-07-26 23:04:33.806587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.806613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.806785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.806810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.806983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.807008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.807188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.807215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.807388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.807413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.807587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.807612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.807757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.807782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.807923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.807949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.808120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.808146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.808315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.808340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 
00:34:41.460 [2024-07-26 23:04:33.808507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.808532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.808705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.808731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.808877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.808908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.809086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.809116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.809286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.809312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.809479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.809504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.809642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.809667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.809806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.809831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.810003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.810029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.810170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.810196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 
00:34:41.460 [2024-07-26 23:04:33.810340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.810365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.810541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.810566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.810708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.810733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.810880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.810905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-26 23:04:33.811081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-26 23:04:33.811117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.811308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.811333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.811470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.811495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.811662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.811687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.811833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.811858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.812006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.812031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 
00:34:41.461 [2024-07-26 23:04:33.812215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.812240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.812406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.812431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.812574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.812599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.812770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.812797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.812965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.812990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.813189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.813215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.813365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.813391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.813534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.813560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.813755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.813781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.813952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.813977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 
00:34:41.461 [2024-07-26 23:04:33.814119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.814145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.814299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.814324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.814494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.814519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.814685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.814711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.814854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.814879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.815047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.815079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.815260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.815285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.815423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.815448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.815608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.815633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.815777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.815803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 
00:34:41.461 [2024-07-26 23:04:33.815979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.816004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.816182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.816208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.816347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.816372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.816552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.816577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.816719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.816744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.816916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.816941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.817142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.461 [2024-07-26 23:04:33.817168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.461 qpair failed and we were unable to recover it. 00:34:41.461 [2024-07-26 23:04:33.817315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.817340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.817500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.817525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.817690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.817715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 
00:34:41.462 [2024-07-26 23:04:33.817881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.817906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.818104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.818129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.818270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.818295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.818432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.818457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.818602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.818627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.818775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.818801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.818945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.818971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.819169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.819194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.819338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.819364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.819514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.819539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 
00:34:41.462 [2024-07-26 23:04:33.819713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.819738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.819911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.819936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.820076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.820102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.820295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.820320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.820463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.820488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.820672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.820697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.820861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.820886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.821082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.821108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.821281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.821307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 00:34:41.462 [2024-07-26 23:04:33.821499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.821524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it. 
00:34:41.462 [2024-07-26 23:04:33.821721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.462 [2024-07-26 23:04:33.821750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.462 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) for tqpair=0x13da570 at 10.0.0.2:4420 repeat with advancing timestamps through 2024-07-26 23:04:33.853 ...]
00:34:41.467 [2024-07-26 23:04:33.853205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-26 23:04:33.853247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it.
[... identical failures for tqpair=0x7fd440000b90 repeat through 2024-07-26 23:04:33.858 ...]
00:34:41.467 [2024-07-26 23:04:33.858217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-26 23:04:33.858244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it.
[... identical failures for tqpair=0x13da570 repeat through 2024-07-26 23:04:33.862 ...]
00:34:41.468 [2024-07-26 23:04:33.862751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.862776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.862918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.862943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.863107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.863133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.863282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.863307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.863480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.863505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.863667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.863692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.863838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.863863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.864081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.864108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.864354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.864380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.864628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.864653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 
00:34:41.468 [2024-07-26 23:04:33.864806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.864831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.864972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.864997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.865142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.865167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.865343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.865368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.865569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.865594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.865761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.865786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.866033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.866062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.866230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.866255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.866400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.866426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.866567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.866593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 
00:34:41.468 [2024-07-26 23:04:33.866734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.866763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.866936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.866961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.867125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.867150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.867286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.867311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-26 23:04:33.867479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-26 23:04:33.867504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.867669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.867695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.867862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.867887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.868130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.868156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.868403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.868428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.868622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.868647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 
00:34:41.469 [2024-07-26 23:04:33.868807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.868832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.869003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.869028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.869217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.869242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.869411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.869437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.869582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.869607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.869780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.869805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.869943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.869968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.870169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.870195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.870342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.870367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.870533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.870558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 
00:34:41.469 [2024-07-26 23:04:33.870702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.870727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.870896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.870921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.871087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.871113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.871259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.871284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.871452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.871477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.871646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.871671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.871812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.871837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.872003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.872028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.872208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.872234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.872402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.872427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 
00:34:41.469 [2024-07-26 23:04:33.872620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.872645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.872792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.872817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.872988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.873013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.873199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.873225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.873430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.873456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.873622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.873648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.873793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.873820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.874019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.874044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.874198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.874223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.874371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.874396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 
00:34:41.469 [2024-07-26 23:04:33.874564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.874589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.469 qpair failed and we were unable to recover it. 00:34:41.469 [2024-07-26 23:04:33.874757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.469 [2024-07-26 23:04:33.874783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.874957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.874982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.875152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.875178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.875347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.875372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.875539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.875564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.875732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.875757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.875900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.875925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.876121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.876147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.876318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.876343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 
00:34:41.470 [2024-07-26 23:04:33.876489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.876514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.876677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.876702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.876842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.876867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.877040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.877072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.877273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.877298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.877449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.877474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.877670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.877695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.877870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.877895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.878043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.878075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.878248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.878273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 
00:34:41.470 [2024-07-26 23:04:33.878411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.878436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.878588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.878613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.878784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.878809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.878961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.878986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.879132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.879158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.879314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.879339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.879487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.879513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.879663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.879688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.879854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.879883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.880049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.880083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 
00:34:41.470 [2024-07-26 23:04:33.880257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.880283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.880428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.880453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.880592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.880617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.880783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.880808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.880977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.881002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.470 [2024-07-26 23:04:33.881149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.470 [2024-07-26 23:04:33.881175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.470 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.881371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.881397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.881569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.881594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.881762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.881787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.881958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.881983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 
00:34:41.471 [2024-07-26 23:04:33.882156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.882181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.882344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.882369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.882543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.882569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.882717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.882742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.882888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.882913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.883071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.883097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.883241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.883267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.883438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.883464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.883638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.883663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.883837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.883862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 
00:34:41.471 [2024-07-26 23:04:33.884030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.884055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.884200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.884225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.884421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.884447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.884628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.884653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.884847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.884872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.885015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.885044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.885221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.885247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.885447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.885472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.885622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.885650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.885819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.885844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 
00:34:41.471 [2024-07-26 23:04:33.886007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.886033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.886204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.886230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.886364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.886389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.886552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.886577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.886716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.886741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.886935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.886960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.887157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.887183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.887358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.887383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.887518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.887543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 00:34:41.471 [2024-07-26 23:04:33.887747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.887772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it. 
00:34:41.471 [2024-07-26 23:04:33.887942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.471 [2024-07-26 23:04:33.887967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.471 qpair failed and we were unable to recover it.
[... the same three-line error repeats back-to-back from 23:04:33.888135 through 23:04:33.928000 (log offsets 00:34:41.471 to 00:34:41.761): every connect() attempt for tqpair=0x13da570 (addr=10.0.0.2, port=4420) fails with errno = 111, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:41.761 [2024-07-26 23:04:33.928164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.928190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.928359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.928385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.928560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.928587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.928780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.928822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.929001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.929028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.929206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.929233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.929440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.929466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.929601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.929627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.929802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.929827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.930023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.930048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 
00:34:41.761 [2024-07-26 23:04:33.930212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.930239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.930438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.930464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.930666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.930691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.930868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.930893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.931073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.931099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.931244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.931270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.931446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.931471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.931668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.931693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.931847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.931873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.932043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.932081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 
00:34:41.761 [2024-07-26 23:04:33.932264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.932290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.932466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.932493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.932662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.932688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.932844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.932870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.933049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.933084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.933233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.933259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.933433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.933459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.933628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.933653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.761 [2024-07-26 23:04:33.933828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.761 [2024-07-26 23:04:33.933854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.761 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.934022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.934048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 
00:34:41.762 [2024-07-26 23:04:33.934241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.934267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.934442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.934467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.934603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.934629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.934805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.934831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.934974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.934999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.935146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.935173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.935344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.935370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.935567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.935593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.935763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.935788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.935958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.935984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 
00:34:41.762 [2024-07-26 23:04:33.936132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.936160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.936332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.936359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.936504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.936530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.936706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.936733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.936910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.936936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.937109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.937136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.937354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.937380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.937576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.937603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.937772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.937797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.937994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.938019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 
00:34:41.762 [2024-07-26 23:04:33.938176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.938202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.938343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.938368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.938571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.938597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.938762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.938787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.938961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.938987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.939125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.939151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.939297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.939324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.939496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.939522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.939679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.939706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.939879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.939910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 
00:34:41.762 [2024-07-26 23:04:33.940097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.940123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.940318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.940344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.940515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.940540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.940687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.940714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.940890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.940916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.762 [2024-07-26 23:04:33.941081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.762 [2024-07-26 23:04:33.941107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.762 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.941276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.941302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.941474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.941500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.941646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.941672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.941844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.941870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 
00:34:41.763 [2024-07-26 23:04:33.942037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.942068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.942247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.942272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.942418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.942445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.942626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.942652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.942805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.942830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.942963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.942989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.943167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.943194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.943392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.943418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.943627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.943653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.943792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.943818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 
00:34:41.763 [2024-07-26 23:04:33.943990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.944016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.944223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.944249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.944451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.944476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.944620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.944646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.944815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.944841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.945011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.945037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.945198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.945225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.945367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.945393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.945535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.945562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.945737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.945763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 
00:34:41.763 [2024-07-26 23:04:33.945909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.945935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.946138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.946164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.946337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.946362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.946506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.946532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.946699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-26 23:04:33.946725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-26 23:04:33.946870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.946897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.947098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.947125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.947323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.947350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.947516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.947542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.947716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.947746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 
00:34:41.764 [2024-07-26 23:04:33.947950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.947976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.948153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.948179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.948347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.948373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.948573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.948598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.948775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.948801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.948995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.949020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.949229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.949255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.949425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.949452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.949650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.949676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.949877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.949903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 
00:34:41.764 [2024-07-26 23:04:33.950049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.950082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.950226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.950251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.950421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.950447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.950625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.950651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.950820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.950846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.951018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.951044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.951226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.951252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.951448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.951474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.951648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.951675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.951842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.951868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 
00:34:41.764 [2024-07-26 23:04:33.952016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.952042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.952198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.952224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.952391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.952417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.952561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.952588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.952786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.952812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.952985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.953012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.953183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.953210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.953406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.953432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.953607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.953633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-26 23:04:33.953826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-26 23:04:33.953853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 
00:34:41.764 [2024-07-26 23:04:33.954023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.764 [2024-07-26 23:04:33.954048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.764 qpair failed and we were unable to recover it.
[... the three lines above repeat back-to-back with advancing timestamps through 2024-07-26 23:04:33.976584 ...]
00:34:41.768 [2024-07-26 23:04:33.976760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e80f0 is same with the state(5) to be set
00:34:41.768 [2024-07-26 23:04:33.976991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.768 [2024-07-26 23:04:33.977029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da570 with addr=10.0.0.2, port=4420
00:34:41.768 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failed triplet keeps repeating, the reported tqpair cycling among 0x13da570, 0x7fd448000b90, and 0x7fd438000b90, up to the final occurrence below ...]
00:34:41.770 [2024-07-26 23:04:33.995321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.770 [2024-07-26 23:04:33.995347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.770 qpair failed and we were unable to recover it.
00:34:41.770 [2024-07-26 23:04:33.995529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.995558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 00:34:41.770 [2024-07-26 23:04:33.995728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.995754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 00:34:41.770 [2024-07-26 23:04:33.995907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.995934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 00:34:41.770 [2024-07-26 23:04:33.996143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.996170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 00:34:41.770 [2024-07-26 23:04:33.996344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.996370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 00:34:41.770 [2024-07-26 23:04:33.996544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.996571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 00:34:41.770 [2024-07-26 23:04:33.996744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.996770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 00:34:41.770 [2024-07-26 23:04:33.996941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.996967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 00:34:41.770 [2024-07-26 23:04:33.997140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.997166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 00:34:41.770 [2024-07-26 23:04:33.997307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.770 [2024-07-26 23:04:33.997333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.770 qpair failed and we were unable to recover it. 
00:34:41.771 [2024-07-26 23:04:33.997503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.997529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.997728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.997754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.997923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.997949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.998121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.998147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.998321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.998347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.998545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.998571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.998744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.998769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.998942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.998969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.999139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.999165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.999310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.999336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 
00:34:41.771 [2024-07-26 23:04:33.999476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.999504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.999674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.999699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:33.999868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:33.999894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.000072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.000099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.000272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.000298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.000441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.000467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.000641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.000667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.000840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.000867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.001039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.001083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.001255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.001281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 
00:34:41.771 [2024-07-26 23:04:34.001476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.001501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.001673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.001699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.001887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.001913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.002110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.002137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.002281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.002308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.002481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.002507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.002671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.002697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.002864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.002890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.003065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.003092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.003263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.003290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 
00:34:41.771 [2024-07-26 23:04:34.003432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.003462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.003633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.003658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.003863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.003889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.004057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.004089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.004267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.004293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.004463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.004489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.771 qpair failed and we were unable to recover it. 00:34:41.771 [2024-07-26 23:04:34.004662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.771 [2024-07-26 23:04:34.004688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.004882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.004908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.005055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.005097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.005291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.005317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 
00:34:41.772 [2024-07-26 23:04:34.005459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.005485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.005659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.005684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.005853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.005879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.006050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.006083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.006264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.006290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.006462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.006487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.006670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.006696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.006864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.006890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.007093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.007120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-26 23:04:34.007266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-26 23:04:34.007292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 
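errno = 111 on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 while the target is down, so the kernel rejects every TCP connect() immediately. A minimal standalone sketch of the failing call, using a plain POSIX socket (an illustration only, not SPDK's actual posix_sock_create):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port, this prints errno 111 (Connection refused). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }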
00:34:41.772 [... the same failure against tqpair=0x7fd438000b90 continues through 23:04:34.008 ...]
00:34:41.772 [2024-07-26 23:04:34.008858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.772 [2024-07-26 23:04:34.008893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd448000b90 with addr=10.0.0.2, port=4420
00:34:41.772 qpair failed and we were unable to recover it.
00:34:41.772 [... six consecutive attempts fail against tqpair=0x7fd448000b90 (23:04:34.008 - 23:04:34.010), after which the failures resume against tqpair=0x7fd438000b90 and repeat through 23:04:34.023 ...]
00:34:41.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3699598 Killed "${NVMF_APP[@]}" "$@"
00:34:41.774 [2024-07-26 23:04:34.023199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.774 [2024-07-26 23:04:34.023225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.774 qpair failed and we were unable to recover it.
00:34:41.774 [2024-07-26 23:04:34.023400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.774 [2024-07-26 23:04:34.023425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.774 qpair failed and we were unable to recover it.
00:34:41.774 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:41.774 [2024-07-26 23:04:34.023592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.774 [2024-07-26 23:04:34.023618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.774 qpair failed and we were unable to recover it.
00:34:41.774 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:41.774 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:41.774 [2024-07-26 23:04:34.023792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.774 [2024-07-26 23:04:34.023819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.774 qpair failed and we were unable to recover it.
00:34:41.774 [2024-07-26 23:04:34.023963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.774 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:41.774 [2024-07-26 23:04:34.023989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.774 qpair failed and we were unable to recover it.
00:34:41.774 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:41.774 [2024-07-26 23:04:34.024195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.774 [2024-07-26 23:04:34.024222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.774 qpair failed and we were unable to recover it.
00:34:41.774 [2024-07-26 23:04:34.024370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.774 [2024-07-26 23:04:34.024396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.774 qpair failed and we were unable to recover it.
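The shell trace interleaved above explains the flood of refusals: the test script killed the target process (line 36: Killed "${NVMF_APP[@]}"), and disconnect_init / nvmfappstart are restarting it, so the host keeps retrying the qpair's socket until a listener reappears on 10.0.0.2:4420. A hedged sketch of such a connect-with-retry loop (illustrative only; the function name and backoff policy here are assumptions, not nvme_tcp_qpair_connect_sock's actual logic):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Keep attempting the TCP connection until the (restarted) target
     * accepts, or we run out of tries. Returns a connected fd or -1. */
    static int connect_with_retry(const char *ip, int port, int max_tries)
    {
        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;

            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            inet_pton(AF_INET, ip, &addr.sin_addr);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                  /* target is listening again */

            /* While the target is down, each attempt fails with errno 111. */
            fprintf(stderr, "attempt %d: errno = %d (%s)\n",
                    i + 1, errno, strerror(errno));
            close(fd);
            usleep(100 * 1000);             /* back off 100 ms between tries */
        }
        return -1;                          /* refused the whole time */
    }

    int main(void)
    {
        int fd = connect_with_retry("10.0.0.2", 4420, 30);
        if (fd >= 0)
            close(fd);
        return fd >= 0 ? 0 : 1;
    }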
00:34:41.774 [... connect() failures to 10.0.0.2:4420 continue uninterrupted; duplicate triplets elided ...]
00:34:41.775 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3700148
00:34:41.775 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:41.775 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3700148
00:34:41.775 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3700148 ']'
00:34:41.775 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:41.775 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:41.775 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:41.775 [... connect()/qpair failure triplets interleaved with the trace above, one of them split mid-token by the nvmf_tgt launch line; duplicates elided ...]
00:34:41.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:41.775 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:41.775 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:41.775 [... connect()/qpair failure triplets continue; duplicates elided ...]
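Per the trace, waitforlisten polls for up to max_retries=100 attempts until the new target (pid 3700148) exposes its RPC socket at /var/tmp/spdk.sock; the "Waiting for process..." banner above is its echo. A minimal sketch under those assumptions; the real helper in autotest_common.sh polls the RPC socket, and the details here are simplified guesses:

    # Minimal sketch of waitforlisten, inferred from the trace; body is assumed.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # RPC socket is up
            sleep 0.5
        done
        return 1                                     # timed out waiting
    }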
00:34:41.776 [... the connect() failed, errno = 111 / sock connection error of tqpair=0x7fd438000b90 / qpair failed triplet repeats continuously from 23:04:34.031649 through 23:04:34.060844; duplicates elided ...]
00:34:41.780 [2024-07-26 23:04:34.061013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.061039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.061195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.061221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.061397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.061423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.061622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.061648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.061788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.061814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.061983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.062010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.062181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.062209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.062383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.062413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.062607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.062633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.062835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.062861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 
00:34:41.780 [2024-07-26 23:04:34.063002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.063028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.063214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.063240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.063414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.063440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.063636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.063661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.063832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.063858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.063997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.064022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.064183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.064209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.064352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.064378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.064550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.064577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.064748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.064774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 
00:34:41.780 [2024-07-26 23:04:34.064914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.064941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.065147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.065173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.065311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.065337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.065530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.065555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.065751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.065777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.065920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.065945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-26 23:04:34.066111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-26 23:04:34.066138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.066314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.066340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.066506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.066532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.066673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.066699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 
00:34:41.781 [2024-07-26 23:04:34.066871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.066897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.067038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.067067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.067240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.067265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.067437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.067465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.067611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.067637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.067804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.067829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.068010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.068035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.068240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.068266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.068443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.068469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.068642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.068668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 
00:34:41.781 [2024-07-26 23:04:34.068838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.068863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.069013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.069038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.069237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.069263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.069449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.069475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.069626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.069651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.069821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.069846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.070023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.070050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.070243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.070273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.070454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.070481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.070682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.070708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 
00:34:41.781 [2024-07-26 23:04:34.070903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.070929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.071098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.071125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.071325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.071351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.071518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.071544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.071717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.071743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.071882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.071907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.072104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.072130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.072273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.072299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.072508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.072534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.072711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.072737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 
00:34:41.781 [2024-07-26 23:04:34.072907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.072933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.073109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.073136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.073313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-26 23:04:34.073347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-26 23:04:34.073516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-26 23:04:34.073543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-26 23:04:34.073683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-26 23:04:34.073709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-26 23:04:34.073877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-26 23:04:34.073903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-26 23:04:34.074071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-26 23:04:34.074097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-26 23:04:34.074240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-26 23:04:34.074267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-26 23:04:34.074468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-26 23:04:34.074494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-26 23:04:34.074641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-26 23:04:34.074667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 
00:34:41.782 [2024-07-26 23:04:34.074864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-26 23:04:34.074889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it.
00:34:41.782 [2024-07-26 23:04:34.075610] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:34:41.782 [2024-07-26 23:04:34.075686] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:41.782 [... the connect() failed / sock connection error / qpair failed triplet continues to repeat around the initialization messages ...]
00:34:41.782 [... the connect() failed, errno = 111 / sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. triplet keeps repeating with advancing timestamps through 2024-07-26 23:04:34.094 ...]
00:34:41.785 [2024-07-26 23:04:34.094496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.094522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.094697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.094723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.094866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.094892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.095065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.095091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.095265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.095291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.095438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.095464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.095634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.095660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.095827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.095852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.096019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.096045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.096189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.096215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 
00:34:41.785 [2024-07-26 23:04:34.096385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.096411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.096617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.096644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.096819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.096845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.096988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.097015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.097190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.097217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.097418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.097444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.097610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.097636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.097808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.097835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.098009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.098035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.098197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.098224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 
00:34:41.785 [2024-07-26 23:04:34.098389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.098415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.098581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.098607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.098779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.098805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.098948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.098975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.099182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.099209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.099357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.099382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.099528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.099555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.099730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.099756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.099932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.099957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.100125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.100151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 
00:34:41.785 [2024-07-26 23:04:34.100348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.100374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.100512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.100538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.100705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.100731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.100900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.100925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.101124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.785 [2024-07-26 23:04:34.101149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.785 qpair failed and we were unable to recover it. 00:34:41.785 [2024-07-26 23:04:34.101321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.101347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.101546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.101572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.101757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.101787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.101941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.101967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.102162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.102188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 
00:34:41.786 [2024-07-26 23:04:34.102375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.102400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.102539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.102564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.102741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.102767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.102949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.102975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.103125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.103153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.103297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.103324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.103491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.103517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.103693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.103719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.103868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.103893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.104108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.104134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 
00:34:41.786 [2024-07-26 23:04:34.104300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.104326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.104529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.104554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.104758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.104783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.104963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.104989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.105134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.105160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.105331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.105357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.105503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.105529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.105675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.105701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.105851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.105877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.106042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.106084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 
00:34:41.786 [2024-07-26 23:04:34.106255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.106281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.106490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.106515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.106727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.106752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.106926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.106952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.107180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.107207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.107352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.107377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.107568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.107594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.107793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.107819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.108028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.108054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 00:34:41.786 [2024-07-26 23:04:34.108259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.786 [2024-07-26 23:04:34.108285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.786 qpair failed and we were unable to recover it. 
00:34:41.786 [... connect() failed (errno = 111) / qpair failure sequence continues from 23:04:34.108441 through 23:04:34.109098 ...]
00:34:41.787 EAL: No free 2048 kB hugepages reported on node 1
00:34:41.787 [... sequence continues from 23:04:34.109276 through 23:04:34.110257 ...]
00:34:41.787 [... sequence continues, repeating through 23:04:34.127629 ...]
00:34:41.789 [2024-07-26 23:04:34.127773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.789 [2024-07-26 23:04:34.127800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.789 qpair failed and we were unable to recover it.
00:34:41.789 [2024-07-26 23:04:34.127994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-26 23:04:34.128020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-26 23:04:34.128187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-26 23:04:34.128214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-26 23:04:34.128379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-26 23:04:34.128404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-26 23:04:34.128573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-26 23:04:34.128599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-26 23:04:34.128743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-26 23:04:34.128769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-26 23:04:34.128946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-26 23:04:34.128971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-26 23:04:34.129147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.129174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.129371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.129397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.129539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.129564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.129718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.129744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 
00:34:41.790 [2024-07-26 23:04:34.129911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.129937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.130083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.130109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.130286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.130312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.130462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.130489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.130687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.130713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.130881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.130907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.131083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.131109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.131251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.131278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.131438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.131463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.131644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.131669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 
00:34:41.790 [2024-07-26 23:04:34.131841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.131866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.132066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.132092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.132231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.132257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.132430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.132457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.132627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.132653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.132820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.132845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.133042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.133075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.133243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.133273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.133444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.133469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.133610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.133636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 
00:34:41.790 [2024-07-26 23:04:34.133778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.133804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.133980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.134010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.134183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.134210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.134378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.134404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.134578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.134605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.134745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.134772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.134914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.134941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.135137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.135164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.135333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.135359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.135505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.135531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 
00:34:41.790 [2024-07-26 23:04:34.135727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.135753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.135898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.790 [2024-07-26 23:04:34.135924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.790 qpair failed and we were unable to recover it. 00:34:41.790 [2024-07-26 23:04:34.136070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.136096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.136237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.136264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.136439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.136465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.136614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.136641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.136809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.136836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.137011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.137037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.137190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.137217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.137390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.137417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 
00:34:41.791 [2024-07-26 23:04:34.137614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.137640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.137830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.137855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.138050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.138081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.138231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.138257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.138437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.138463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.138612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.138638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.138812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.138839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.139019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.139044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.139198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.139226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.139398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.139424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 
00:34:41.791 [2024-07-26 23:04:34.139566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.139592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.139786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.139811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.139958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.139983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.140124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.140150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.140298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.140324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.140484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.140511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.140676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.140703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.140842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.140872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.141013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.141039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.141191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.141217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 
00:34:41.791 [2024-07-26 23:04:34.141413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.141439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.141610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.141636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.141775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.141801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.141975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.142001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.142144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.142170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.142345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.142370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.142518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.142544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.142690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.142716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.142854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.142879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 00:34:41.791 [2024-07-26 23:04:34.143041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.791 [2024-07-26 23:04:34.143072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.791 qpair failed and we were unable to recover it. 
00:34:41.792 [2024-07-26 23:04:34.143240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.143266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.143436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.143461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.143631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.143657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.143824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.143851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.144013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.144039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.144200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.144227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.144398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.144424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.144563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.144590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.144775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.144800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.145009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.145035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 
00:34:41.792 [2024-07-26 23:04:34.145218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.145245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.145417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.145443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.145593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.145619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.145757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.145782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.145962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.145988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.146183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.146210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.146249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:41.792 [2024-07-26 23:04:34.146374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.146400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.146537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.146563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.146705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.146731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 
00:34:41.792 [2024-07-26 23:04:34.146926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.146951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.147093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.147119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.147287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.147313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.147465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.147492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.147654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.147680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.147950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.147976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.148213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.148239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.148380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.148406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.148559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.148585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.148824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.148851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 
00:34:41.792 [2024-07-26 23:04:34.149083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.149110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-26 23:04:34.149310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-26 23:04:34.149335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.149503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.149528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.149701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.149727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.149902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.149927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.150078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.150104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.150302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.150327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.150473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.150500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.150677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.150703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.150844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.150871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 
00:34:41.793 [2024-07-26 23:04:34.151082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.151109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.151375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.151406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.151667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.151693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.151893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.151918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.152177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.152203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.152360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.152387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.152574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.152600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.152743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.152769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.153009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.153035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-26 23:04:34.153246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-26 23:04:34.153272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 
00:34:41.793 [2024-07-26 23:04:34.153417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.793 [2024-07-26 23:04:34.153443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.793 qpair failed and we were unable to recover it.
00:34:41.793 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 23:04:34.153618 and 23:04:34.195182. The failing tqpair is 0x7fd438000b90 for most of the retries, with shorter runs against 0x7fd440000b90, 0x7fd448000b90, and 0x13da570 ...]
00:34:41.799 [2024-07-26 23:04:34.195330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.799 [2024-07-26 23:04:34.195356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.799 qpair failed and we were unable to recover it.
00:34:41.799 [2024-07-26 23:04:34.195523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.195549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.195702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.195728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.195900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.195926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.196069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.196096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.196268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.196294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.196445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.196471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.196680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.196706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.196848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.196874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.197048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.197088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.197242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.197268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 
00:34:41.799 [2024-07-26 23:04:34.197416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.197441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.197609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.197635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.197834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.197860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.198003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.198030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.198228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.198255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.198400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.198426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.198597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.198623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.198802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.198829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.198997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.199023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.199209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.199236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 
00:34:41.799 [2024-07-26 23:04:34.199430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.199456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.199601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.199627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.199780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.199808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.199962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.199989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.200165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.200192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.200391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.200418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.200589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.200617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-26 23:04:34.200791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-26 23:04:34.200817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.200988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.201014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.201196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.201223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-26 23:04:34.201387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.201414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.201559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.201586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.201761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.201787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.201985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.202011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.202183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.202209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.202414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.202441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.202612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.202639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.202813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.202838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.203019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.203045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.203254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.203282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-26 23:04:34.203483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.203509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.203682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.203708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.203887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.203913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.204093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.204120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.204268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.204295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.204468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.204494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.204669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.204696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.204893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.204919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.205089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.205120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.205290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.205317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-26 23:04:34.205496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.205521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.205664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.205691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.205890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.205917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.206096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.206122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.206319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.206346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.206486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.206512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.206655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.206682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.206851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.206877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.207048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.207079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.207264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.207291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-26 23:04:34.207431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.207458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.207606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.207632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.207838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.207865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.208004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-26 23:04:34.208030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-26 23:04:34.208184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.208211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.208353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.208379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.208520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.208546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.208694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.208720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.208869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.208895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.209069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.209096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-26 23:04:34.209291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.209318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.209471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.209500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.209672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.209698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.209894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.209920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.210071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.210099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.210273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.210299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.210468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.210495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.210689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.210716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.210913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.210938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.211104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.211131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-26 23:04:34.211285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.211311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.211452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.211478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.211649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.211676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.211826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.211851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.212022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.212048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.212231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.212257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.212403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.212430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.212596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.212622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.212790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.212820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.212993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.213020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-26 23:04:34.213168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.213196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.213347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.213373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.213543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.213569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.213712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.213737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.213914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.213941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.214109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-26 23:04:34.214136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-26 23:04:34.214310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.214337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.214507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.214533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.214709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.214736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.214906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.214931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-26 23:04:34.215101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.215127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.215300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.215328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.215499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.215526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.215697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.215723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.215892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.215917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.216065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.216093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.216239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.216266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.216418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.216445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.216613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.216639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.216787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.216813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-26 23:04:34.216981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.217008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.217191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.217218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.217416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.217442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.217608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.217634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.217803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.217829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.217979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.218004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.218170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.218197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.218364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.218391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.218542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.218567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.218725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.218751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-26 23:04:34.218920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.218946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.219120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.219147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.219323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.219349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.219498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.219524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.219719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.219744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.219894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.219919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.220068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.220095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.220248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.220275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.220451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.220486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-26 23:04:34.220655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-26 23:04:34.220681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-26 23:04:34.220851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.802 [2024-07-26 23:04:34.220877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:41.802 qpair failed and we were unable to recover it.
00:34:41.802 (last 3 messages repeated for every connect attempt from [2024-07-26 23:04:34.221043] through [2024-07-26 23:04:34.236378]; errno = 111 on each attempt; tqpair = 0x7fd438000b90 except for two short runs against tqpair = 0x7fd448000b90 at [23:04:34.222232]-[23:04:34.222722] and [23:04:34.225768]-[23:04:34.226246]; target is always addr=10.0.0.2, port=4420, and every qpair failed and could not be recovered)
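On Linux, errno = 111 is ECONNREFUSED: each connect() to 10.0.0.2:4420 is being answered with a TCP RST because nothing is listening on that port yet, so the host side cannot set up its NVMe/TCP qpairs. A minimal, hedged way to confirm that from the shell (assumes bash with /dev/tcp support and coreutils timeout; the two-second limit is an arbitrary choice):

# Hedged sketch: check whether anything is listening on 10.0.0.2:4420.
# Exit status 0 means the TCP handshake completed; non-zero matches the
# ECONNREFUSED (errno = 111) storm in the log above.
if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listener is up on 10.0.0.2:4420"
else
    echo "connect refused or timed out, as in the errno = 111 lines above"
fi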
00:34:42.080 (the connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats from [2024-07-26 23:04:34.236517] through [2024-07-26 23:04:34.237600], tqpair=0x7fd438000b90, addr=10.0.0.2, port=4420)
00:34:42.080 [2024-07-26 23:04:34.237649] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:42.080 [2024-07-26 23:04:34.237683] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:42.080 [2024-07-26 23:04:34.237698] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:42.080 [2024-07-26 23:04:34.237710] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:42.080 [2024-07-26 23:04:34.237721] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:42.080 [2024-07-26 23:04:34.237747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.080 [2024-07-26 23:04:34.237772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:42.080 qpair failed and we were unable to recover it.
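The app.c NOTICE lines are the freshly started target advertising its trace setup (all tracepoint groups enabled via mask 0xFFFF). Taking the log's own suggestions literally, a trace snapshot of this run could be captured like this while the target is alive (only the destination filename below is an invented choice):

# Capture a snapshot of nvmf tracepoints at runtime, as the NOTICE suggests:
spdk_trace -s nvmf -i 0
# Or preserve the raw shared-memory trace buffer for offline analysis/debug
# (source path is quoted verbatim from the NOTICE; destination is arbitrary):
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0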
00:34:42.080 [2024-07-26 23:04:34.237800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:42.080 [2024-07-26 23:04:34.237827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:42.080 [2024-07-26 23:04:34.237853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:42.080 [2024-07-26 23:04:34.237856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:42.080 [2024-07-26 23:04:34.237914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.080 [2024-07-26 23:04:34.237940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:42.080 qpair failed and we were unable to recover it.
00:34:42.080 (the connect() failed, errno = 111 / sock connection error / qpair failed sequence then repeats from [2024-07-26 23:04:34.238118] through [2024-07-26 23:04:34.239468], tqpair=0x7fd438000b90, addr=10.0.0.2, port=4420)
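Reactors coming up on cores 4-7 says the target was pinned to a four-core mask; bits 4 through 7 set is 0xf0. A hedged sketch of the corresponding launch (the nvmf_tgt path is an assumption about this workspace's layout; -m is SPDK's usual core-mask option):

# Hedged sketch: core mask 0xf0 has bits 4..7 set, matching the four
# "Reactor started on core N" notices above (N = 4, 5, 6, 7).
./build/bin/nvmf_tgt -m 0xf0
# expected startup notices: "Reactor started on core 4" ... "core 7"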
00:34:42.080 (the same sequence continues uninterrupted from [2024-07-26 23:04:34.239716] through [2024-07-26 23:04:34.260450], wall clock 00:34:42.080-00:34:42.083: connect() failed, errno = 111; sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it, on every attempt)
00:34:42.083 [2024-07-26 23:04:34.260633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.083 [2024-07-26 23:04:34.260659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.083 qpair failed and we were unable to recover it. 00:34:42.083 [2024-07-26 23:04:34.260829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.083 [2024-07-26 23:04:34.260855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.083 qpair failed and we were unable to recover it. 00:34:42.083 [2024-07-26 23:04:34.261029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.083 [2024-07-26 23:04:34.261055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.083 qpair failed and we were unable to recover it. 00:34:42.083 [2024-07-26 23:04:34.261197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.083 [2024-07-26 23:04:34.261222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.083 qpair failed and we were unable to recover it. 00:34:42.083 [2024-07-26 23:04:34.261387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.083 [2024-07-26 23:04:34.261413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.083 qpair failed and we were unable to recover it. 00:34:42.083 [2024-07-26 23:04:34.261578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.083 [2024-07-26 23:04:34.261603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.083 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.261744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.261771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.261920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.261946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.262129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.262156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.262439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.262465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 
00:34:42.084 [2024-07-26 23:04:34.262657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.262683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.262824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.262851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.262990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.263016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.263191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.263217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.263359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.263386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.263564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.263589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.263743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.263768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.263969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.263994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.264128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.264154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.264311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.264338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 
00:34:42.084 [2024-07-26 23:04:34.264476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.264501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.264640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.264666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.264841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.264867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.265054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.265086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.265271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.265297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.265435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.265461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.265635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.265660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.265825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.265851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.266036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.266073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.266246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.266271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 
00:34:42.084 [2024-07-26 23:04:34.266453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.266479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.266641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.266666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.266907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.266932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.267114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.267141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.267278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.267304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.267469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.267495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.267627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.267654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.084 qpair failed and we were unable to recover it. 00:34:42.084 [2024-07-26 23:04:34.267790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.084 [2024-07-26 23:04:34.267816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.267958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.267985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.268134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.268162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 
00:34:42.085 [2024-07-26 23:04:34.268321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.268349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.268501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.268527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.268678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.268704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.268955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.268981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.269150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.269177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.269324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.269350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.269519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.269545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.269701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.269727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.269905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.269932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.270098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.270125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 
00:34:42.085 [2024-07-26 23:04:34.270259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.270284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.270456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.270481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.270625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.270651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.270787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.270812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.270959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.270984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.271132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.271158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.271361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.271387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.271534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.271560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.271724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.271750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.271900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.271927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 
00:34:42.085 [2024-07-26 23:04:34.272160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.272187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.272332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.272358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.272542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.272568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.272797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.272823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.272992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.273018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.273170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.273197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.273337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.273364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.273565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.273591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.273728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.273757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.274008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.274034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 
00:34:42.085 [2024-07-26 23:04:34.274292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.274319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.085 qpair failed and we were unable to recover it. 00:34:42.085 [2024-07-26 23:04:34.274484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.085 [2024-07-26 23:04:34.274510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.274681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.274706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.274843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.274868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.275010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.275036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.275245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.275272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.275410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.275436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.275608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.275633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.275794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.275820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.275995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.276021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 
00:34:42.086 [2024-07-26 23:04:34.276217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.276244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.276377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.276403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.276563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.276589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.276767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.276794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.276946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.276972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.277141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.277167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.277315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.277341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.277508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.277534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.277698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.277724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.277867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.277893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 
00:34:42.086 [2024-07-26 23:04:34.278083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.278110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.278284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.278310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.278475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.278500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.278653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.278680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.278819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.278845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.279102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.279129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.279298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.279324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.279476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.279501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.279644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.279669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.279851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.279877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 
00:34:42.086 [2024-07-26 23:04:34.280033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.280072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.280262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.280288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.280437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.280462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.280629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.280655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.280791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.280817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.280991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.281016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.281165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.281191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.281337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.086 [2024-07-26 23:04:34.281363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.086 qpair failed and we were unable to recover it. 00:34:42.086 [2024-07-26 23:04:34.281527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.281557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.281717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.281742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-26 23:04:34.281909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.281935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.282111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.282137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.282279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.282305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.282488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.282515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.282655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.282681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.282852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.282877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.283075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.283101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.283304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.283329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.283470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.283497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.283692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.283718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-26 23:04:34.283868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.283893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.284073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.284100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.284266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.284292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.284432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.284459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.284630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.284655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.284796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.284822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.285075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.285101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.285249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.285274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.285411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.285438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-26 23:04:34.285621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-26 23:04:34.285647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-26 23:04:34.285817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.087 [2024-07-26 23:04:34.285842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:42.087 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats with only the microsecond timestamps changing, from 23:04:34.285977 through 23:04:34.326515 (roughly 200 further occurrences); all report tqpair=0x7fd438000b90 except three entries between 23:04:34.317768 and 23:04:34.318213, which report tqpair=0x7fd448000b90 ...]
00:34:42.093 [2024-07-26 23:04:34.326766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.326792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.326972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.326997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.327147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.327174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.327315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.327347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.327524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.327550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.327713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.327738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.327888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.327913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.328056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.328101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.328265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.328291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.328472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.328498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 
00:34:42.093 [2024-07-26 23:04:34.328668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.328694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.328834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.328860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.328999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.329025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.329184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.329210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.329354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.329381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.329545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.329570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.329765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.329791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.329938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.329963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.330109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.330135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-26 23:04:34.330275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-26 23:04:34.330301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 
00:34:42.094 [2024-07-26 23:04:34.330499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.330525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.330666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.330692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.330860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.330886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.331042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.331073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.331215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.331242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.331422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.331448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.331604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.331630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.331766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.331792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.331958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.331984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.332139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.332165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 
00:34:42.094 [2024-07-26 23:04:34.332418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.332443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.332588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.332614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.332776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.332801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.332969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.332995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.333150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.333176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.333309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.333339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.333590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.333616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.333763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.333788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.333971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.333996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.334137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.334163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 
00:34:42.094 [2024-07-26 23:04:34.334373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.334399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.334540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.334565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.334720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.334746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.334944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.334970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.335218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.335244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.335422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.335448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.335581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.335607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.335781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.335808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.335989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.336014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.336202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.336228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 
00:34:42.094 [2024-07-26 23:04:34.336380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-26 23:04:34.336406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-26 23:04:34.336588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.336613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.336777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.336802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.336998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.337023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.337206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.337233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.337406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.337431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.337679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.337705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.337944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.337970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.338108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.338135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.338298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.338324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 
00:34:42.095 [2024-07-26 23:04:34.338494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.338519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.338696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.338723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.338910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.338936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.339106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.339132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.339304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.339329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.339467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.339493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.339635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.339661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.339857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.339882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.340023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.340049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.340305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.340331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 
00:34:42.095 [2024-07-26 23:04:34.340529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.340554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.340700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.340726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.340921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.340946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.341196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.341222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.341366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.341391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.341530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.341561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.341767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.341792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.341936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.341963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.342134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.342162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.342353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.342378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 
00:34:42.095 [2024-07-26 23:04:34.342573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.342599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.342797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.342823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.342966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.342992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.343169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.343195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.343370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.343396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.343566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.343592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.343768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.343793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-26 23:04:34.343934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-26 23:04:34.343960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.344125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.344151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.344315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.344340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 
00:34:42.096 [2024-07-26 23:04:34.344472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.344497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.344634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.344660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.344846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.344872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.345040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.345073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.345257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.345283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.345463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.345489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.345673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.345699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.345845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.345871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.346070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.346096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.346234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.346261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 
00:34:42.096 [2024-07-26 23:04:34.346469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.346495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.346641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.346667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.346920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.346946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.347158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.347184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.347327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.347353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.347498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.347523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.347703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.347728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.347893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.347918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.348076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.348102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.348241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.348268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 
00:34:42.096 [2024-07-26 23:04:34.348437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.348463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.348630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.348656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.348816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.348842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.348985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.349011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.349156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.349182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.349322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.349360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.349560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.349585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.349752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.349778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.349937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.349963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.350134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.350161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 
00:34:42.096 [2024-07-26 23:04:34.350331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.350356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.350500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.350527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.350694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.350720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.350871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.350897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-26 23:04:34.351070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-26 23:04:34.351096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-26 23:04:34.351257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-26 23:04:34.351282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-26 23:04:34.351462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-26 23:04:34.351489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-26 23:04:34.351665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-26 23:04:34.351692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-26 23:04:34.351944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-26 23:04:34.351970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-26 23:04:34.352154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-26 23:04:34.352182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-26 23:04:34.352353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.097 [2024-07-26 23:04:34.352380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:42.097 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet repeats continuously from 23:04:34.352523 through 23:04:34.371290; duplicates elided ...]
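For context: errno = 111 is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 at this point, so every connect() attempt from the initiator is refused and the qpair cannot be established; in a target-disconnect test this is consistent with the target side being down or restarting. A hedged way to confirm the same condition by hand (hypothetical commands, not part of this run):

  # Probe the NVMe-oF TCP listener; while the target is down this fails
  # with "Connection refused" (errno 111).
  nc -zv 10.0.0.2 4420 || echo "no listener on 4420 (connection refused)"
  # Look up the errno name and message for 111:
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused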
00:34:42.099 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:42.100 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:42.100 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:42.100 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:42.100 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure triplets interleaved with the trace above (23:04:34.371469 through 23:04:34.372815) elided ...]
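The `(( i == 0 ))` / `return 0` pair traced above is the tail of the harness's wait-for-target retry loop completing successfully, after which `timing_exit start_nvmf_tgt` closes the startup phase. A minimal sketch of that retry idiom (hypothetical; the real helper lives in common/autotest_common.sh, and `spdk_get_version` is assumed here as the liveness probe):

  # Poll until the target answers RPC; give up after the countdown expires.
  waitforit() {
      local i=30
      while ! scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
          sleep 1
          i=$((i - 1))
          (( i == 0 )) && return 1   # countdown exhausted: target never answered
      done
      return 0                       # target is up and answering RPC
  }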
[... identical connect()/qpair-failure triplet repeats continuously from 23:04:34.372955 through 23:04:34.387188; duplicates elided ...]
00:34:42.102 [2024-07-26 23:04:34.387353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.102 [2024-07-26 23:04:34.387388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.102 qpair failed and we were unable to recover it. 00:34:42.102 [2024-07-26 23:04:34.387547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.102 [2024-07-26 23:04:34.387574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.102 qpair failed and we were unable to recover it. 00:34:42.102 [2024-07-26 23:04:34.387715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.102 [2024-07-26 23:04:34.387741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.102 qpair failed and we were unable to recover it. 00:34:42.102 [2024-07-26 23:04:34.387920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.102 [2024-07-26 23:04:34.387946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.102 qpair failed and we were unable to recover it. 00:34:42.102 [2024-07-26 23:04:34.388124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.102 [2024-07-26 23:04:34.388151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.102 qpair failed and we were unable to recover it. 00:34:42.102 [2024-07-26 23:04:34.388282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.102 [2024-07-26 23:04:34.388307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.102 qpair failed and we were unable to recover it. 00:34:42.102 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:42.102 [2024-07-26 23:04:34.388516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.102 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:42.102 [2024-07-26 23:04:34.388543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.102 qpair failed and we were unable to recover it. 00:34:42.102 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.102 [2024-07-26 23:04:34.388703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.102 [2024-07-26 23:04:34.388730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420 00:34:42.102 qpair failed and we were unable to recover it. 
00:34:42.102 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:42.102 [2024-07-26 23:04:34.388931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.102 [2024-07-26 23:04:34.388958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:42.102 qpair failed and we were unable to recover it.
00:34:42.102 (last three messages repeated 8 more times, tqpair=0x7fd438000b90)
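The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 call traced above asks the running target for a 64 MiB RAM-backed bdev with a 512-byte block size, named Malloc0; the bare "Malloc0" echoed later in this log is that RPC's return value. rpc_cmd is the harness wrapper around SPDK's RPC client, so outside the harness the call would look roughly like this sketch (default socket path assumed):

  # Sketch, assuming spdk_tgt is already running and serving RPCs on the
  # default /var/tmp/spdk.sock socket:
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB, 512 B blocks
  # prints the new bdev's name ("Malloc0") on success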
00:34:42.102 [2024-07-26 23:04:34.390836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.102 [2024-07-26 23:04:34.390862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:42.102 qpair failed and we were unable to recover it.
00:34:42.104 (last three messages repeated 58 more times for tqpair=0x7fd438000b90, then once for tqpair=0x7fd440000b90)
00:34:42.104 [2024-07-26 23:04:34.402445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.104 [2024-07-26 23:04:34.402473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:42.104 qpair failed and we were unable to recover it.
00:34:42.105 (last three messages repeated 39 more times, tqpair=0x7fd440000b90)
00:34:42.105 [2024-07-26 23:04:34.410217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-26 23:04:34.410244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 (last three messages repeated 7 more times, tqpair=0x7fd440000b90, interleaved with the shell trace below)
00:34:42.105 Malloc0
00:34:42.105 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:42.105 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:42.105 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:42.105 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:42.105 [2024-07-26 23:04:34.411746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-26 23:04:34.411776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 (last three messages repeated 9 more times, tqpair=0x7fd440000b90)
00:34:42.105 [2024-07-26 23:04:34.413632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-26 23:04:34.413662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd440000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.106 (last three messages repeated 8 more times, first for tqpair=0x7fd440000b90, then for tqpair=0x7fd438000b90)
00:34:42.106 [2024-07-26 23:04:34.414483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
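The "*** TCP Transport Init ***" notice is the target side acknowledging the nvmf_create_transport -t tcp -o RPC issued in the trace above (the extra -o is a harness-supplied transport option; it is omitted from the sketch below rather than guessed at). Stripped of the rpc_cmd wrapper, the core step is roughly:

  # Sketch: create the NVMe-oF TCP transport on a running target
  # (default RPC socket assumed; the test's extra "-o" flag omitted).
  ./scripts/rpc.py nvmf_create_transport -t TCP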
00:34:42.106 [2024-07-26 23:04:34.415477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-26 23:04:34.415504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd438000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.107 (last three messages repeated 29 more times across tqpair=0x7fd438000b90, tqpair=0x7fd440000b90, and tqpair=0x13da570)
00:34:42.107 [... further connect()-refused records for tqpair=0x7fd438000b90 and tqpair=0x13da570 (23:04:34.421243 through 23:04:34.422662) ...]
00:34:42.107 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:42.107 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:42.107 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:42.107 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:42.107 [... connect()-refused records for tqpair=0x13da570 continue in parallel (23:04:34.422850 through 23:04:34.424530) ...]
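The rpc_cmd nvmf_create_subsystem call in the trace above is the harness's wrapper around SPDK's JSON-RPC client. Run standalone against the same target, it would look roughly like the sketch below; the scripts/rpc.py path assumes the usual SPDK source-tree layout and is not taken from this log:

  # Sketch: create the NVMe-oF subsystem, allowing any host (-a)
  # and assigning the serial number seen in the trace (-s).
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001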
00:34:42.107 [... the connect()-refused record keeps repeating, alternating between tqpair=0x13da570 and tqpair=0x7fd440000b90 (23:04:34.424684 through 23:04:34.430189) ...]
00:34:42.108 [... connect()-refused records for tqpair=0x13da570 continue (23:04:34.430330 through 23:04:34.430876) ...]
00:34:42.108 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:42.108 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:42.108 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:42.108 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:42.108 [... connect()-refused records for tqpair=0x13da570 and tqpair=0x7fd440000b90 continue in parallel (23:04:34.431048 through 23:04:34.431663) ...]
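Malloc0 here is a RAM-backed bdev the test created earlier in the run. For orientation, creating such a bdev and attaching it as a namespace of cnode1 looks roughly like this sketch (the 64 MiB size and 512-byte block size are assumptions, not values from this log):

  # Sketch: create a RAM-backed bdev, then expose it as a namespace.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # total_size_mb block_size
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0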
00:34:42.108 [... the connect()-refused record keeps repeating for tqpair=0x13da570 and tqpair=0x7fd440000b90; from 23:04:34.438446 it also appears for tqpair=0x7fd448000b90 (23:04:34.431821 through 23:04:34.438679) ...]
00:34:42.109 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:42.109 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:42.109 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:42.109 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:42.109 [... connect()-refused records for tqpair=0x13da570 continue in parallel (23:04:34.438864 through 23:04:34.440084) ...]
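The nvmf_subsystem_add_listener call above is what finally opens 10.0.0.2:4420; a TCP transport has to exist in the target before a listener can be added. Standalone, the two steps would look roughly like the following sketch (transport options omitted, defaults assumed):

  # Sketch: create the TCP transport once, then add the subsystem listener.
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420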
00:34:42.110 [... connect()-refused records for tqpair=0x13da570 continue (23:04:34.440239 through 23:04:34.441911) ...]
00:34:42.110 [... final connect()-refused records for tqpair=0x13da570 (23:04:34.442045 through 23:04:34.442594) ...]
00:34:42.110 [2024-07-26 23:04:34.442723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:42.110 [2024-07-26 23:04:34.445219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.110 [2024-07-26 23:04:34.445398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.110 [2024-07-26 23:04:34.445431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.110 [2024-07-26 23:04:34.445449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.110 [2024-07-26 23:04:34.445462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:42.110 [2024-07-26 23:04:34.445499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:42.110 qpair failed and we were unable to recover it.
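Note the failure mode changes at this point: the TCP socket now connects (the target is listening), but the NVMe-oF Fabrics CONNECT for I/O qpair id 2 is rejected with "Unknown controller ID 0x1" and completes with sct 1, sc 130 (a Fabrics command-specific status). The failure has moved from the socket layer up to the Fabrics layer, which is what this disconnect test exercises. As a hypothetical illustration only (not part of the harness), the two phases can be told apart from an initiator with stock nvme-cli:

  # Phase 1 (no listener): both commands fail to even open the socket.
  # Phase 2 (listener up): the socket opens, but CONNECT may still be rejected.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1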
00:34:42.110 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:42.110 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:42.110 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:42.110 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:42.110 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:42.110 23:04:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3699625
00:34:42.110 [... the Unknown-controller-ID / Fabrics CONNECT failure record for tqpair=0x7fd440000b90 repeats at 23:04:34.455099 and 23:04:34.465121 ...]
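The wait at target_disconnect.sh@50 is the harness blocking on a job it backgrounded earlier (here PID 3699625, whatever $! captured at the time). In minimal form the pattern looks like the sketch below; the job name is hypothetical and stands in for the test's background workload:

  # Sketch of the background-job pattern behind "wait <pid>":
  run_background_io_workload &   # hypothetical stand-in for the real job
  bg_pid=$!
  # ... the test reconfigures/disconnects the target while the job runs ...
  wait "$bg_pid"                 # block until the job exits; propagate its status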
00:34:42.111 [... the same seven-line record — ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair "Unknown controller ID 0x1", nvme_fabric.c CONNECT failed (rc -5, sct 1, sc 130), nvme_tcp.c failed to poll/connect tqpair=0x7fd440000b90, nvme_qpair.c CQ transport error -6 (No such device or address) on qpair id 2, "qpair failed and we were unable to recover it." — repeats at roughly 10 ms intervals from 23:04:34.475184 through 23:04:34.645578 ...]
00:34:42.372 [2024-07-26 23:04:34.655644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.655792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.655820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.655841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.655855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.655894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 00:34:42.372 [2024-07-26 23:04:34.665596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.665737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.665763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.665777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.665791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.665821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 00:34:42.372 [2024-07-26 23:04:34.675657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.675807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.675833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.675847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.675861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.675891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 
00:34:42.372 [2024-07-26 23:04:34.685656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.685802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.685828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.685841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.685855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.685885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 00:34:42.372 [2024-07-26 23:04:34.695700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.695840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.695865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.695880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.695893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.695922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 00:34:42.372 [2024-07-26 23:04:34.705689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.705832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.705863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.705878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.705891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.705921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 
00:34:42.372 [2024-07-26 23:04:34.715710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.715894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.715920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.715935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.715947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.715977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 00:34:42.372 [2024-07-26 23:04:34.725735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.725877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.725903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.725917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.725930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.725960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 00:34:42.372 [2024-07-26 23:04:34.735772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.735918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.735944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.735959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.735972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.736003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 
00:34:42.372 [2024-07-26 23:04:34.745804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.372 [2024-07-26 23:04:34.745972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.372 [2024-07-26 23:04:34.745998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.372 [2024-07-26 23:04:34.746012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.372 [2024-07-26 23:04:34.746031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.372 [2024-07-26 23:04:34.746069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.372 qpair failed and we were unable to recover it. 00:34:42.373 [2024-07-26 23:04:34.755833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.755988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.756013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.756028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.756041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.756080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 00:34:42.373 [2024-07-26 23:04:34.765829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.765977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.766004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.766018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.766031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.766067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 
00:34:42.373 [2024-07-26 23:04:34.775888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.776027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.776052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.776076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.776090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.776121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 00:34:42.373 [2024-07-26 23:04:34.785887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.786033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.786067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.786084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.786100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.786131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 00:34:42.373 [2024-07-26 23:04:34.795917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.796084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.796110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.796125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.796139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.796169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 
00:34:42.373 [2024-07-26 23:04:34.805980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.806178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.806205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.806219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.806232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.806263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 00:34:42.373 [2024-07-26 23:04:34.815977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.816138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.816166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.816181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.816194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.816224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 00:34:42.373 [2024-07-26 23:04:34.826002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.826162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.826189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.826204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.826217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.826248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 
00:34:42.373 [2024-07-26 23:04:34.836033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.836184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.836210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.836231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.836246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.836276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 00:34:42.373 [2024-07-26 23:04:34.846102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.846271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.846298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.846312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.846325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.846355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 00:34:42.373 [2024-07-26 23:04:34.856074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.856216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.856242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.856256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.856269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.856299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 
00:34:42.373 [2024-07-26 23:04:34.866121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.373 [2024-07-26 23:04:34.866257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.373 [2024-07-26 23:04:34.866284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.373 [2024-07-26 23:04:34.866298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.373 [2024-07-26 23:04:34.866312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.373 [2024-07-26 23:04:34.866365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.373 qpair failed and we were unable to recover it. 00:34:42.634 [2024-07-26 23:04:34.876156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.634 [2024-07-26 23:04:34.876324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.634 [2024-07-26 23:04:34.876353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.634 [2024-07-26 23:04:34.876371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.634 [2024-07-26 23:04:34.876384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.634 [2024-07-26 23:04:34.876416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.634 qpair failed and we were unable to recover it. 00:34:42.634 [2024-07-26 23:04:34.886150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.634 [2024-07-26 23:04:34.886344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.634 [2024-07-26 23:04:34.886370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.634 [2024-07-26 23:04:34.886385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.634 [2024-07-26 23:04:34.886399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.634 [2024-07-26 23:04:34.886428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.634 qpair failed and we were unable to recover it. 
00:34:42.634 [2024-07-26 23:04:34.896176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.634 [2024-07-26 23:04:34.896316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.634 [2024-07-26 23:04:34.896341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.634 [2024-07-26 23:04:34.896356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.634 [2024-07-26 23:04:34.896369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.634 [2024-07-26 23:04:34.896398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.634 qpair failed and we were unable to recover it. 00:34:42.634 [2024-07-26 23:04:34.906211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.634 [2024-07-26 23:04:34.906355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.634 [2024-07-26 23:04:34.906381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.634 [2024-07-26 23:04:34.906395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.634 [2024-07-26 23:04:34.906408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.634 [2024-07-26 23:04:34.906438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.634 qpair failed and we were unable to recover it. 00:34:42.634 [2024-07-26 23:04:34.916274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.634 [2024-07-26 23:04:34.916421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.634 [2024-07-26 23:04:34.916447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.634 [2024-07-26 23:04:34.916461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.634 [2024-07-26 23:04:34.916474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.634 [2024-07-26 23:04:34.916504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.634 qpair failed and we were unable to recover it. 
00:34:42.634 [2024-07-26 23:04:34.926280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.634 [2024-07-26 23:04:34.926423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.634 [2024-07-26 23:04:34.926448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.634 [2024-07-26 23:04:34.926469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.634 [2024-07-26 23:04:34.926482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.634 [2024-07-26 23:04:34.926513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.634 qpair failed and we were unable to recover it. 00:34:42.634 [2024-07-26 23:04:34.936351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.634 [2024-07-26 23:04:34.936499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.634 [2024-07-26 23:04:34.936525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.634 [2024-07-26 23:04:34.936539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.634 [2024-07-26 23:04:34.936553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.634 [2024-07-26 23:04:34.936582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.634 qpair failed and we were unable to recover it. 00:34:42.634 [2024-07-26 23:04:34.946352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.634 [2024-07-26 23:04:34.946492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.634 [2024-07-26 23:04:34.946518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.634 [2024-07-26 23:04:34.946532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.634 [2024-07-26 23:04:34.946545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.634 [2024-07-26 23:04:34.946575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.634 qpair failed and we were unable to recover it. 
00:34:42.634 [2024-07-26 23:04:34.956422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.634 [2024-07-26 23:04:34.956572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.634 [2024-07-26 23:04:34.956597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.634 [2024-07-26 23:04:34.956611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.634 [2024-07-26 23:04:34.956624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.634 [2024-07-26 23:04:34.956654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.634 qpair failed and we were unable to recover it. 00:34:42.634 [2024-07-26 23:04:34.966392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:34.966540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:34.966565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:34.966579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:34.966593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:34.966623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 00:34:42.635 [2024-07-26 23:04:34.976422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:34.976560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:34.976584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:34.976599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:34.976612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:34.976641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 
00:34:42.635 [2024-07-26 23:04:34.986440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:34.986595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:34.986620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:34.986634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:34.986648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:34.986678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 00:34:42.635 [2024-07-26 23:04:34.996503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:34.996647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:34.996673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:34.996687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:34.996700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:34.996730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 00:34:42.635 [2024-07-26 23:04:35.006510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.006650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:35.006676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:35.006690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:35.006704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:35.006734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 
00:34:42.635 [2024-07-26 23:04:35.016577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.016736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:35.016766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:35.016781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:35.016794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:35.016824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 00:34:42.635 [2024-07-26 23:04:35.026575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.026716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:35.026741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:35.026756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:35.026769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:35.026798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 00:34:42.635 [2024-07-26 23:04:35.036671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.036840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:35.036865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:35.036879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:35.036891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:35.036920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 
00:34:42.635 [2024-07-26 23:04:35.046611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.046760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:35.046786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:35.046801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:35.046814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:35.046845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 00:34:42.635 [2024-07-26 23:04:35.056660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.056802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:35.056828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:35.056842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:35.056856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:35.056891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 00:34:42.635 [2024-07-26 23:04:35.066693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.066833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:35.066859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:35.066874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:35.066887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:35.066929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 
00:34:42.635 [2024-07-26 23:04:35.076775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.076972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:35.076996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:35.077010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:35.077022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:35.077051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 00:34:42.635 [2024-07-26 23:04:35.086839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.087022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.635 [2024-07-26 23:04:35.087047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.635 [2024-07-26 23:04:35.087068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.635 [2024-07-26 23:04:35.087083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.635 [2024-07-26 23:04:35.087126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.635 qpair failed and we were unable to recover it. 00:34:42.635 [2024-07-26 23:04:35.096797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.635 [2024-07-26 23:04:35.096941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.636 [2024-07-26 23:04:35.096966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.636 [2024-07-26 23:04:35.096980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.636 [2024-07-26 23:04:35.096993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.636 [2024-07-26 23:04:35.097024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.636 qpair failed and we were unable to recover it. 
00:34:42.636 [2024-07-26 23:04:35.106790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.636 [2024-07-26 23:04:35.106943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.636 [2024-07-26 23:04:35.106974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.636 [2024-07-26 23:04:35.106989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.636 [2024-07-26 23:04:35.107002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.636 [2024-07-26 23:04:35.107032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.636 qpair failed and we were unable to recover it. 00:34:42.636 [2024-07-26 23:04:35.116909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.636 [2024-07-26 23:04:35.117079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.636 [2024-07-26 23:04:35.117105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.636 [2024-07-26 23:04:35.117119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.636 [2024-07-26 23:04:35.117133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.636 [2024-07-26 23:04:35.117163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.636 qpair failed and we were unable to recover it. 00:34:42.636 [2024-07-26 23:04:35.126896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.636 [2024-07-26 23:04:35.127076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.636 [2024-07-26 23:04:35.127106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.636 [2024-07-26 23:04:35.127122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.636 [2024-07-26 23:04:35.127135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:42.636 [2024-07-26 23:04:35.127166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.636 qpair failed and we were unable to recover it. 
[log condensed: the same seven-record CONNECT failure sequence repeats for every further I/O qpair attempt, target timestamps [2024-07-26 23:04:35.136956] through [2024-07-26 23:04:35.789007] (console time 00:34:42.897 through 00:34:43.420); each attempt again reports "Unknown controller ID 0x1", "sct 1, sc 130", and "CQ transport error -6 (No such device or address) on qpair id 2" for tqpair=0x7fd440000b90, and ends with "qpair failed and we were unable to recover it."]
00:34:43.420 [2024-07-26 23:04:35.798866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.420 [2024-07-26 23:04:35.799028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.420 [2024-07-26 23:04:35.799053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.420 [2024-07-26 23:04:35.799079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.420 [2024-07-26 23:04:35.799093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.420 [2024-07-26 23:04:35.799123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.420 qpair failed and we were unable to recover it. 00:34:43.420 [2024-07-26 23:04:35.808866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.420 [2024-07-26 23:04:35.809065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.420 [2024-07-26 23:04:35.809092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.420 [2024-07-26 23:04:35.809106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.420 [2024-07-26 23:04:35.809119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.420 [2024-07-26 23:04:35.809148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.420 qpair failed and we were unable to recover it. 00:34:43.420 [2024-07-26 23:04:35.818850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.420 [2024-07-26 23:04:35.818995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.420 [2024-07-26 23:04:35.819026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.420 [2024-07-26 23:04:35.819042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.420 [2024-07-26 23:04:35.819055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.420 [2024-07-26 23:04:35.819095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.420 qpair failed and we were unable to recover it. 
00:34:43.420 [2024-07-26 23:04:35.828957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.420 [2024-07-26 23:04:35.829104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.420 [2024-07-26 23:04:35.829131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.420 [2024-07-26 23:04:35.829145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.420 [2024-07-26 23:04:35.829158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.420 [2024-07-26 23:04:35.829201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.420 qpair failed and we were unable to recover it. 00:34:43.420 [2024-07-26 23:04:35.838929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.420 [2024-07-26 23:04:35.839097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.420 [2024-07-26 23:04:35.839122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.420 [2024-07-26 23:04:35.839137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.420 [2024-07-26 23:04:35.839150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.420 [2024-07-26 23:04:35.839180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.420 qpair failed and we were unable to recover it. 00:34:43.420 [2024-07-26 23:04:35.848999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.420 [2024-07-26 23:04:35.849149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.420 [2024-07-26 23:04:35.849175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.420 [2024-07-26 23:04:35.849189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.420 [2024-07-26 23:04:35.849202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.421 [2024-07-26 23:04:35.849232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.421 qpair failed and we were unable to recover it. 
00:34:43.421 [2024-07-26 23:04:35.858981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.421 [2024-07-26 23:04:35.859124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.421 [2024-07-26 23:04:35.859151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.421 [2024-07-26 23:04:35.859166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.421 [2024-07-26 23:04:35.859180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.421 [2024-07-26 23:04:35.859228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-26 23:04:35.868992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.421 [2024-07-26 23:04:35.869132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.421 [2024-07-26 23:04:35.869159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.421 [2024-07-26 23:04:35.869173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.421 [2024-07-26 23:04:35.869186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.421 [2024-07-26 23:04:35.869217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-26 23:04:35.879038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.421 [2024-07-26 23:04:35.879210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.421 [2024-07-26 23:04:35.879236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.421 [2024-07-26 23:04:35.879251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.421 [2024-07-26 23:04:35.879264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.421 [2024-07-26 23:04:35.879294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.421 qpair failed and we were unable to recover it. 
00:34:43.421 [2024-07-26 23:04:35.889155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.421 [2024-07-26 23:04:35.889300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.421 [2024-07-26 23:04:35.889327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.421 [2024-07-26 23:04:35.889347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.421 [2024-07-26 23:04:35.889361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.421 [2024-07-26 23:04:35.889393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-26 23:04:35.899110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.421 [2024-07-26 23:04:35.899247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.421 [2024-07-26 23:04:35.899273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.421 [2024-07-26 23:04:35.899288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.421 [2024-07-26 23:04:35.899301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.421 [2024-07-26 23:04:35.899331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-26 23:04:35.909146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.421 [2024-07-26 23:04:35.909296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.421 [2024-07-26 23:04:35.909327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.421 [2024-07-26 23:04:35.909342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.421 [2024-07-26 23:04:35.909356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.421 [2024-07-26 23:04:35.909385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.421 qpair failed and we were unable to recover it. 
00:34:43.421 [2024-07-26 23:04:35.919146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.421 [2024-07-26 23:04:35.919298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.421 [2024-07-26 23:04:35.919324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.421 [2024-07-26 23:04:35.919339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.421 [2024-07-26 23:04:35.919352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.421 [2024-07-26 23:04:35.919381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.680 [2024-07-26 23:04:35.929176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.680 [2024-07-26 23:04:35.929338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.680 [2024-07-26 23:04:35.929364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.680 [2024-07-26 23:04:35.929378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.680 [2024-07-26 23:04:35.929391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.680 [2024-07-26 23:04:35.929421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.680 qpair failed and we were unable to recover it. 00:34:43.680 [2024-07-26 23:04:35.939202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.680 [2024-07-26 23:04:35.939390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.680 [2024-07-26 23:04:35.939415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.680 [2024-07-26 23:04:35.939430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.680 [2024-07-26 23:04:35.939443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.680 [2024-07-26 23:04:35.939473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.680 qpair failed and we were unable to recover it. 
00:34:43.680 [2024-07-26 23:04:35.949229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.680 [2024-07-26 23:04:35.949371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.680 [2024-07-26 23:04:35.949397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.680 [2024-07-26 23:04:35.949411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.680 [2024-07-26 23:04:35.949425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.680 [2024-07-26 23:04:35.949461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.680 qpair failed and we were unable to recover it. 00:34:43.680 [2024-07-26 23:04:35.959368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.680 [2024-07-26 23:04:35.959524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.680 [2024-07-26 23:04:35.959551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.680 [2024-07-26 23:04:35.959565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.680 [2024-07-26 23:04:35.959578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.680 [2024-07-26 23:04:35.959607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.680 qpair failed and we were unable to recover it. 00:34:43.680 [2024-07-26 23:04:35.969290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.680 [2024-07-26 23:04:35.969435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.680 [2024-07-26 23:04:35.969461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.680 [2024-07-26 23:04:35.969476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.680 [2024-07-26 23:04:35.969489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.680 [2024-07-26 23:04:35.969520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.680 qpair failed and we were unable to recover it. 
00:34:43.680 [2024-07-26 23:04:35.979305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.680 [2024-07-26 23:04:35.979459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.680 [2024-07-26 23:04:35.979485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.680 [2024-07-26 23:04:35.979499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.680 [2024-07-26 23:04:35.979513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:35.979552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 00:34:43.681 [2024-07-26 23:04:35.989368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:35.989522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:35.989548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:35.989562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:35.989576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:35.989605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 00:34:43.681 [2024-07-26 23:04:35.999429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:35.999581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:35.999607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:35.999621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:35.999634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:35.999663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 
00:34:43.681 [2024-07-26 23:04:36.009385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.009535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.009562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.009576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.009590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.009619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 00:34:43.681 [2024-07-26 23:04:36.019477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.019617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.019653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.019668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.019681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.019710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 00:34:43.681 [2024-07-26 23:04:36.029495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.029641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.029667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.029681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.029694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.029724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 
00:34:43.681 [2024-07-26 23:04:36.039503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.039649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.039675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.039690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.039708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.039738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 00:34:43.681 [2024-07-26 23:04:36.049516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.049670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.049695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.049710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.049723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.049754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 00:34:43.681 [2024-07-26 23:04:36.059578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.059718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.059744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.059758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.059771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.059801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 
00:34:43.681 [2024-07-26 23:04:36.069582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.069769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.069794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.069808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.069822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.069851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 00:34:43.681 [2024-07-26 23:04:36.079674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.079848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.079873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.079886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.079899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.079928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 00:34:43.681 [2024-07-26 23:04:36.089675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.089831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.089859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.089878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.089892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.089923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 
00:34:43.681 [2024-07-26 23:04:36.099724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.099876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.099903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.099917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.681 [2024-07-26 23:04:36.099930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.681 [2024-07-26 23:04:36.099961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.681 qpair failed and we were unable to recover it. 00:34:43.681 [2024-07-26 23:04:36.109665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.681 [2024-07-26 23:04:36.109811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.681 [2024-07-26 23:04:36.109837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.681 [2024-07-26 23:04:36.109852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.682 [2024-07-26 23:04:36.109865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.682 [2024-07-26 23:04:36.109896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.682 qpair failed and we were unable to recover it. 00:34:43.682 [2024-07-26 23:04:36.119702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.682 [2024-07-26 23:04:36.119850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.682 [2024-07-26 23:04:36.119876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.682 [2024-07-26 23:04:36.119890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.682 [2024-07-26 23:04:36.119903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.682 [2024-07-26 23:04:36.119934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.682 qpair failed and we were unable to recover it. 
00:34:43.682 [2024-07-26 23:04:36.129728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.682 [2024-07-26 23:04:36.129910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.682 [2024-07-26 23:04:36.129936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.682 [2024-07-26 23:04:36.129956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.682 [2024-07-26 23:04:36.129970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.682 [2024-07-26 23:04:36.130000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.682 qpair failed and we were unable to recover it. 00:34:43.682 [2024-07-26 23:04:36.139758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.682 [2024-07-26 23:04:36.139919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.682 [2024-07-26 23:04:36.139945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.682 [2024-07-26 23:04:36.139959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.682 [2024-07-26 23:04:36.139972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.682 [2024-07-26 23:04:36.140001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.682 qpair failed and we were unable to recover it. 00:34:43.682 [2024-07-26 23:04:36.149885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.682 [2024-07-26 23:04:36.150075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.682 [2024-07-26 23:04:36.150102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.682 [2024-07-26 23:04:36.150115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.682 [2024-07-26 23:04:36.150129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.682 [2024-07-26 23:04:36.150158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.682 qpair failed and we were unable to recover it. 
00:34:43.682 [2024-07-26 23:04:36.159857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.682 [2024-07-26 23:04:36.160007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.682 [2024-07-26 23:04:36.160033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.682 [2024-07-26 23:04:36.160047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.682 [2024-07-26 23:04:36.160067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.682 [2024-07-26 23:04:36.160099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.682 qpair failed and we were unable to recover it. 00:34:43.682 [2024-07-26 23:04:36.169889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.682 [2024-07-26 23:04:36.170067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.682 [2024-07-26 23:04:36.170093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.682 [2024-07-26 23:04:36.170107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.682 [2024-07-26 23:04:36.170120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.682 [2024-07-26 23:04:36.170150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.682 qpair failed and we were unable to recover it. 00:34:43.682 [2024-07-26 23:04:36.179926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.682 [2024-07-26 23:04:36.180078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.682 [2024-07-26 23:04:36.180104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.682 [2024-07-26 23:04:36.180118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.682 [2024-07-26 23:04:36.180131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.682 [2024-07-26 23:04:36.180161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.682 qpair failed and we were unable to recover it. 
00:34:43.941 [2024-07-26 23:04:36.189932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.941 [2024-07-26 23:04:36.190075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.941 [2024-07-26 23:04:36.190101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.941 [2024-07-26 23:04:36.190116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.941 [2024-07-26 23:04:36.190129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.941 [2024-07-26 23:04:36.190171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.941 qpair failed and we were unable to recover it. 00:34:43.941 [2024-07-26 23:04:36.199934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.941 [2024-07-26 23:04:36.200087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.941 [2024-07-26 23:04:36.200113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.941 [2024-07-26 23:04:36.200127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.941 [2024-07-26 23:04:36.200141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.941 [2024-07-26 23:04:36.200170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.941 qpair failed and we were unable to recover it. 00:34:43.941 [2024-07-26 23:04:36.209989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.941 [2024-07-26 23:04:36.210159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.941 [2024-07-26 23:04:36.210185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.941 [2024-07-26 23:04:36.210199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.941 [2024-07-26 23:04:36.210213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.941 [2024-07-26 23:04:36.210244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.941 qpair failed and we were unable to recover it. 
00:34:43.941 [2024-07-26 23:04:36.220033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.941 [2024-07-26 23:04:36.220205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.941 [2024-07-26 23:04:36.220236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.941 [2024-07-26 23:04:36.220252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.941 [2024-07-26 23:04:36.220265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.941 [2024-07-26 23:04:36.220295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.941 qpair failed and we were unable to recover it. 00:34:43.941 [2024-07-26 23:04:36.230008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.941 [2024-07-26 23:04:36.230148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.941 [2024-07-26 23:04:36.230175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.941 [2024-07-26 23:04:36.230189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.941 [2024-07-26 23:04:36.230202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.941 [2024-07-26 23:04:36.230232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.941 qpair failed and we were unable to recover it. 00:34:43.941 [2024-07-26 23:04:36.240098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.942 [2024-07-26 23:04:36.240257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.942 [2024-07-26 23:04:36.240283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.942 [2024-07-26 23:04:36.240297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.942 [2024-07-26 23:04:36.240310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.942 [2024-07-26 23:04:36.240341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.942 qpair failed and we were unable to recover it. 
00:34:43.942 [2024-07-26 23:04:36.250108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.942 [2024-07-26 23:04:36.250268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.942 [2024-07-26 23:04:36.250293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.942 [2024-07-26 23:04:36.250308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.942 [2024-07-26 23:04:36.250321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.942 [2024-07-26 23:04:36.250350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.942 qpair failed and we were unable to recover it. 00:34:43.942 [2024-07-26 23:04:36.260120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.942 [2024-07-26 23:04:36.260261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.942 [2024-07-26 23:04:36.260287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.942 [2024-07-26 23:04:36.260301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.942 [2024-07-26 23:04:36.260314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.942 [2024-07-26 23:04:36.260350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.942 qpair failed and we were unable to recover it. 00:34:43.942 [2024-07-26 23:04:36.270158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.942 [2024-07-26 23:04:36.270313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.942 [2024-07-26 23:04:36.270339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.942 [2024-07-26 23:04:36.270353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.942 [2024-07-26 23:04:36.270366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.942 [2024-07-26 23:04:36.270395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.942 qpair failed and we were unable to recover it. 
00:34:43.942 [2024-07-26 23:04:36.280197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.942 [2024-07-26 23:04:36.280343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.942 [2024-07-26 23:04:36.280369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.942 [2024-07-26 23:04:36.280384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.942 [2024-07-26 23:04:36.280397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.942 [2024-07-26 23:04:36.280428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.942 qpair failed and we were unable to recover it. 00:34:43.942 [2024-07-26 23:04:36.290202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.942 [2024-07-26 23:04:36.290345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.942 [2024-07-26 23:04:36.290371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.942 [2024-07-26 23:04:36.290385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.942 [2024-07-26 23:04:36.290399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.942 [2024-07-26 23:04:36.290429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.942 qpair failed and we were unable to recover it. 00:34:43.942 [2024-07-26 23:04:36.300253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.942 [2024-07-26 23:04:36.300393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.942 [2024-07-26 23:04:36.300418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.942 [2024-07-26 23:04:36.300432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.942 [2024-07-26 23:04:36.300445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:43.942 [2024-07-26 23:04:36.300475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.942 qpair failed and we were unable to recover it. 
00:34:43.942 [2024-07-26 23:04:36.310261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.942 [2024-07-26 23:04:36.310438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.942 [2024-07-26 23:04:36.310470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.942 [2024-07-26 23:04:36.310489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.942 [2024-07-26 23:04:36.310503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.942 [2024-07-26 23:04:36.310536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.942 qpair failed and we were unable to recover it.
00:34:43.942 [2024-07-26 23:04:36.320383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.942 [2024-07-26 23:04:36.320528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.942 [2024-07-26 23:04:36.320554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.942 [2024-07-26 23:04:36.320568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.942 [2024-07-26 23:04:36.320582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.942 [2024-07-26 23:04:36.320628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.942 qpair failed and we were unable to recover it.
00:34:43.942 [2024-07-26 23:04:36.330356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.942 [2024-07-26 23:04:36.330495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.942 [2024-07-26 23:04:36.330521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.942 [2024-07-26 23:04:36.330535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.942 [2024-07-26 23:04:36.330548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.942 [2024-07-26 23:04:36.330590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.942 qpair failed and we were unable to recover it.
00:34:43.942 [2024-07-26 23:04:36.340345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.942 [2024-07-26 23:04:36.340487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.942 [2024-07-26 23:04:36.340512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.942 [2024-07-26 23:04:36.340526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.942 [2024-07-26 23:04:36.340539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.942 [2024-07-26 23:04:36.340569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.942 qpair failed and we were unable to recover it.
00:34:43.942 [2024-07-26 23:04:36.350443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.942 [2024-07-26 23:04:36.350587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.942 [2024-07-26 23:04:36.350614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.942 [2024-07-26 23:04:36.350628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.942 [2024-07-26 23:04:36.350641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.942 [2024-07-26 23:04:36.350676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.942 qpair failed and we were unable to recover it.
00:34:43.942 [2024-07-26 23:04:36.360423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.942 [2024-07-26 23:04:36.360585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.942 [2024-07-26 23:04:36.360611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.942 [2024-07-26 23:04:36.360625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.942 [2024-07-26 23:04:36.360638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.942 [2024-07-26 23:04:36.360667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.942 qpair failed and we were unable to recover it.
00:34:43.942 [2024-07-26 23:04:36.370437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.942 [2024-07-26 23:04:36.370576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.943 [2024-07-26 23:04:36.370601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.943 [2024-07-26 23:04:36.370615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.943 [2024-07-26 23:04:36.370628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.943 [2024-07-26 23:04:36.370657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.943 qpair failed and we were unable to recover it.
00:34:43.943 [2024-07-26 23:04:36.380472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.943 [2024-07-26 23:04:36.380650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.943 [2024-07-26 23:04:36.380676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.943 [2024-07-26 23:04:36.380690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.943 [2024-07-26 23:04:36.380703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.943 [2024-07-26 23:04:36.380733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.943 qpair failed and we were unable to recover it.
00:34:43.943 [2024-07-26 23:04:36.390505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.943 [2024-07-26 23:04:36.390689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.943 [2024-07-26 23:04:36.390715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.943 [2024-07-26 23:04:36.390729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.943 [2024-07-26 23:04:36.390742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.943 [2024-07-26 23:04:36.390772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.943 qpair failed and we were unable to recover it.
00:34:43.943 [2024-07-26 23:04:36.400556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.943 [2024-07-26 23:04:36.400703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.943 [2024-07-26 23:04:36.400737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.943 [2024-07-26 23:04:36.400753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.943 [2024-07-26 23:04:36.400766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.943 [2024-07-26 23:04:36.400796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.943 qpair failed and we were unable to recover it.
00:34:43.943 [2024-07-26 23:04:36.410656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.943 [2024-07-26 23:04:36.410833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.943 [2024-07-26 23:04:36.410858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.943 [2024-07-26 23:04:36.410872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.943 [2024-07-26 23:04:36.410885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.943 [2024-07-26 23:04:36.410916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.943 qpair failed and we were unable to recover it.
00:34:43.943 [2024-07-26 23:04:36.420582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.943 [2024-07-26 23:04:36.420727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.943 [2024-07-26 23:04:36.420754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.943 [2024-07-26 23:04:36.420768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.943 [2024-07-26 23:04:36.420781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.943 [2024-07-26 23:04:36.420810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.943 qpair failed and we were unable to recover it.
00:34:43.943 [2024-07-26 23:04:36.430612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.943 [2024-07-26 23:04:36.430745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.943 [2024-07-26 23:04:36.430771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.943 [2024-07-26 23:04:36.430786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.943 [2024-07-26 23:04:36.430798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.943 [2024-07-26 23:04:36.430840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:43.943 qpair failed and we were unable to recover it.
00:34:43.943 [2024-07-26 23:04:36.440618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:43.943 [2024-07-26 23:04:36.440765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:43.943 [2024-07-26 23:04:36.440791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:43.943 [2024-07-26 23:04:36.440805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:43.943 [2024-07-26 23:04:36.440824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:43.943 [2024-07-26 23:04:36.440856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.202 qpair failed and we were unable to recover it.
00:34:44.202 [2024-07-26 23:04:36.450652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.202 [2024-07-26 23:04:36.450808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.202 [2024-07-26 23:04:36.450834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.202 [2024-07-26 23:04:36.450848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.202 [2024-07-26 23:04:36.450861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.202 [2024-07-26 23:04:36.450892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.202 qpair failed and we were unable to recover it.
00:34:44.202 [2024-07-26 23:04:36.460674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.202 [2024-07-26 23:04:36.460811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.202 [2024-07-26 23:04:36.460838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.202 [2024-07-26 23:04:36.460852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.202 [2024-07-26 23:04:36.460865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.460894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.470747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.470910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.470938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.470952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.470966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.470996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.480746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.480892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.480918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.480933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.480948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.480978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.490773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.490916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.490942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.490956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.490969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.490999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.500815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.500959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.500984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.500998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.501011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.501042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.510881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.511032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.511067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.511084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.511098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.511129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.520894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.521034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.521065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.521082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.521096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.521126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.530884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.531024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.531050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.531087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.531102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.531132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.540973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.541119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.541145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.541159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.541172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.541202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.550951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.551110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.551136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.551150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.551164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.551193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.560984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.561161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.561187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.561201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.561214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.561244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.571011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.571147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.571173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.571187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.571200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.571243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.581047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.581191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.581218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.581232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.581246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.581276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.203 qpair failed and we were unable to recover it.
00:34:44.203 [2024-07-26 23:04:36.591079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.203 [2024-07-26 23:04:36.591248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.203 [2024-07-26 23:04:36.591274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.203 [2024-07-26 23:04:36.591288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.203 [2024-07-26 23:04:36.591302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.203 [2024-07-26 23:04:36.591331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.601179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.601322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.601348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.601362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.601375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.601418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.611118] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.611262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.611288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.611302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.611315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.611347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.621157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.621304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.621331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.621351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.621365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.621397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.631215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.631356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.631382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.631396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.631410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.631440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.641249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.641402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.641429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.641443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.641456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.641486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.651307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.651479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.651505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.651519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.651532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.651562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.661319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.661490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.661517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.661531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.661544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.661573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.671291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.671436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.671462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.671476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.671490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.671520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.681372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.681541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.681567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.681582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.681595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.681624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.691439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.691613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.691640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.691654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.691667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.691709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.204 [2024-07-26 23:04:36.701411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.204 [2024-07-26 23:04:36.701552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.204 [2024-07-26 23:04:36.701579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.204 [2024-07-26 23:04:36.701594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.204 [2024-07-26 23:04:36.701607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.204 [2024-07-26 23:04:36.701637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.204 qpair failed and we were unable to recover it.
00:34:44.465 [2024-07-26 23:04:36.711420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.465 [2024-07-26 23:04:36.711561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.465 [2024-07-26 23:04:36.711592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.465 [2024-07-26 23:04:36.711608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.465 [2024-07-26 23:04:36.711621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.465 [2024-07-26 23:04:36.711654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.465 qpair failed and we were unable to recover it.
00:34:44.465 [2024-07-26 23:04:36.721468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.465 [2024-07-26 23:04:36.721609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.465 [2024-07-26 23:04:36.721635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.465 [2024-07-26 23:04:36.721650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.465 [2024-07-26 23:04:36.721663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.465 [2024-07-26 23:04:36.721693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.465 qpair failed and we were unable to recover it.
00:34:44.465 [2024-07-26 23:04:36.731468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.465 [2024-07-26 23:04:36.731606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.465 [2024-07-26 23:04:36.731633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.465 [2024-07-26 23:04:36.731648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.465 [2024-07-26 23:04:36.731662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.465 [2024-07-26 23:04:36.731695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.465 qpair failed and we were unable to recover it.
00:34:44.465 [2024-07-26 23:04:36.741503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.465 [2024-07-26 23:04:36.741657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.465 [2024-07-26 23:04:36.741684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.465 [2024-07-26 23:04:36.741699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.465 [2024-07-26 23:04:36.741721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.465 [2024-07-26 23:04:36.741755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.465 qpair failed and we were unable to recover it.
00:34:44.465 [2024-07-26 23:04:36.751519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.465 [2024-07-26 23:04:36.751670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.465 [2024-07-26 23:04:36.751697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.465 [2024-07-26 23:04:36.751711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.465 [2024-07-26 23:04:36.751724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.465 [2024-07-26 23:04:36.751760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.465 qpair failed and we were unable to recover it.
00:34:44.465 [2024-07-26 23:04:36.761554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.465 [2024-07-26 23:04:36.761703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.465 [2024-07-26 23:04:36.761729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.465 [2024-07-26 23:04:36.761744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.465 [2024-07-26 23:04:36.761758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.465 [2024-07-26 23:04:36.761790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.465 qpair failed and we were unable to recover it.
00:34:44.465 [2024-07-26 23:04:36.771626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.465 [2024-07-26 23:04:36.771776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.465 [2024-07-26 23:04:36.771803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.771822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.771837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.771870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.781620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.781784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.781811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.781826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.781839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.781871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.791717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.791906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.791932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.791947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.791960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.791991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.801689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.801838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.801874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.801890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.801910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.801945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.811701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.811844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.811871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.811886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.811900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.811940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.821855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.822047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.822082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.822097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.822110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.822140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.831735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.831874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.831900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.831915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.831928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.831958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.841776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.841923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.841949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.841963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.841982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.842014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.851799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.851939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.851965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.851979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.851992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.852023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.861867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.862017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.862042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.862056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.862080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.862111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.871872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.872010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.872035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.872049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.872070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.872102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.881939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.882121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.882147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.882161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.882174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.882204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.891950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.892105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.892130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.892145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.892157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.466 [2024-07-26 23:04:36.892188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.466 qpair failed and we were unable to recover it.
00:34:44.466 [2024-07-26 23:04:36.901963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.466 [2024-07-26 23:04:36.902135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.466 [2024-07-26 23:04:36.902160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.466 [2024-07-26 23:04:36.902174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.466 [2024-07-26 23:04:36.902187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.467 [2024-07-26 23:04:36.902216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.467 qpair failed and we were unable to recover it.
00:34:44.467 [2024-07-26 23:04:36.911985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.467 [2024-07-26 23:04:36.912148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.467 [2024-07-26 23:04:36.912173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.467 [2024-07-26 23:04:36.912187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.467 [2024-07-26 23:04:36.912199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.467 [2024-07-26 23:04:36.912229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.467 qpair failed and we were unable to recover it.
00:34:44.467 [2024-07-26 23:04:36.922046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.467 [2024-07-26 23:04:36.922233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.467 [2024-07-26 23:04:36.922259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.467 [2024-07-26 23:04:36.922273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.467 [2024-07-26 23:04:36.922286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.467 [2024-07-26 23:04:36.922315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.467 qpair failed and we were unable to recover it.
00:34:44.467 [2024-07-26 23:04:36.932029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.467 [2024-07-26 23:04:36.932180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.467 [2024-07-26 23:04:36.932206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.467 [2024-07-26 23:04:36.932226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.467 [2024-07-26 23:04:36.932240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.467 [2024-07-26 23:04:36.932270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.467 qpair failed and we were unable to recover it.
00:34:44.467 [2024-07-26 23:04:36.942042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.467 [2024-07-26 23:04:36.942208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.467 [2024-07-26 23:04:36.942233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.467 [2024-07-26 23:04:36.942247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.467 [2024-07-26 23:04:36.942260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.467 [2024-07-26 23:04:36.942292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.467 qpair failed and we were unable to recover it.
00:34:44.467 [2024-07-26 23:04:36.952104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.467 [2024-07-26 23:04:36.952283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.467 [2024-07-26 23:04:36.952308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.467 [2024-07-26 23:04:36.952322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.467 [2024-07-26 23:04:36.952335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.467 [2024-07-26 23:04:36.952365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.467 qpair failed and we were unable to recover it.
00:34:44.467 [2024-07-26 23:04:36.962126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.467 [2024-07-26 23:04:36.962272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.467 [2024-07-26 23:04:36.962298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.467 [2024-07-26 23:04:36.962312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.467 [2024-07-26 23:04:36.962325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.467 [2024-07-26 23:04:36.962355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.467 qpair failed and we were unable to recover it.
00:34:44.727 [2024-07-26 23:04:36.972138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.727 [2024-07-26 23:04:36.972286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.727 [2024-07-26 23:04:36.972313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.727 [2024-07-26 23:04:36.972328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.727 [2024-07-26 23:04:36.972344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.727 [2024-07-26 23:04:36.972376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.727 qpair failed and we were unable to recover it.
00:34:44.727 [2024-07-26 23:04:36.982300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.727 [2024-07-26 23:04:36.982476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.727 [2024-07-26 23:04:36.982502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.727 [2024-07-26 23:04:36.982516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.727 [2024-07-26 23:04:36.982530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.727 [2024-07-26 23:04:36.982559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.727 qpair failed and we were unable to recover it.
00:34:44.727 [2024-07-26 23:04:36.992316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.727 [2024-07-26 23:04:36.992487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.727 [2024-07-26 23:04:36.992512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.727 [2024-07-26 23:04:36.992527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.727 [2024-07-26 23:04:36.992540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.727 [2024-07-26 23:04:36.992581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.727 qpair failed and we were unable to recover it.
00:34:44.727 [2024-07-26 23:04:37.002244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.002393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.002419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.002433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.002447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.002478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.012252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.012401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.012427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.012441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.012454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.012483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.022273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.022416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.022443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.022463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.022477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.022506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.032326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.032467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.032493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.032507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.032520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.032551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.042395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.042543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.042568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.042583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.042595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.042624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.052360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.052501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.052527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.052541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.052555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.052584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.062428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.062608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.062635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.062649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.062662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.062694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.072448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.072621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.072646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.072661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.072674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.072705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.082508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.082684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.082712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.082726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.082738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.082767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.092515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.092667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.092694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.092709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.092722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.092752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.102633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.102786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.102813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.102827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.102840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.102872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.112575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.112759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.112792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.112809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.112822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.112853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.122604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.728 [2024-07-26 23:04:37.122751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.728 [2024-07-26 23:04:37.122777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.728 [2024-07-26 23:04:37.122791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.728 [2024-07-26 23:04:37.122805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.728 [2024-07-26 23:04:37.122834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.728 qpair failed and we were unable to recover it.
00:34:44.728 [2024-07-26 23:04:37.132578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.729 [2024-07-26 23:04:37.132735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.729 [2024-07-26 23:04:37.132761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.729 [2024-07-26 23:04:37.132775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.729 [2024-07-26 23:04:37.132788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.729 [2024-07-26 23:04:37.132817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.729 qpair failed and we were unable to recover it.
00:34:44.729 [2024-07-26 23:04:37.142649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.729 [2024-07-26 23:04:37.142789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.729 [2024-07-26 23:04:37.142816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.729 [2024-07-26 23:04:37.142830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.729 [2024-07-26 23:04:37.142843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.729 [2024-07-26 23:04:37.142875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.729 qpair failed and we were unable to recover it.
00:34:44.729 [2024-07-26 23:04:37.152700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.729 [2024-07-26 23:04:37.152904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.729 [2024-07-26 23:04:37.152931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.729 [2024-07-26 23:04:37.152951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.729 [2024-07-26 23:04:37.152966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.729 [2024-07-26 23:04:37.153002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.729 qpair failed and we were unable to recover it.
00:34:44.729 [2024-07-26 23:04:37.162717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.729 [2024-07-26 23:04:37.162895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.729 [2024-07-26 23:04:37.162921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.729 [2024-07-26 23:04:37.162936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.729 [2024-07-26 23:04:37.162950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.729 [2024-07-26 23:04:37.162979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.729 qpair failed and we were unable to recover it.
00:34:44.729 [2024-07-26 23:04:37.172681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.729 [2024-07-26 23:04:37.172823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.729 [2024-07-26 23:04:37.172849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.729 [2024-07-26 23:04:37.172863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.729 [2024-07-26 23:04:37.172877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.729 [2024-07-26 23:04:37.172907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.729 qpair failed and we were unable to recover it.
00:34:44.729 [2024-07-26 23:04:37.182824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.729 [2024-07-26 23:04:37.182962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.729 [2024-07-26 23:04:37.182989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.729 [2024-07-26 23:04:37.183004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.729 [2024-07-26 23:04:37.183017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.729 [2024-07-26 23:04:37.183065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.729 qpair failed and we were unable to recover it.
00:34:44.729 [2024-07-26 23:04:37.192776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.729 [2024-07-26 23:04:37.192933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.729 [2024-07-26 23:04:37.192959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.729 [2024-07-26 23:04:37.192973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.729 [2024-07-26 23:04:37.192987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.729 [2024-07-26 23:04:37.193017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.729 qpair failed and we were unable to recover it.
00:34:44.729 [2024-07-26 23:04:37.202817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.729 [2024-07-26 23:04:37.202986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.729 [2024-07-26 23:04:37.203016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.729 [2024-07-26 23:04:37.203031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.729 [2024-07-26 23:04:37.203044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.729 [2024-07-26 23:04:37.203081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.729 qpair failed and we were unable to recover it.
00:34:44.729 [2024-07-26 23:04:37.212820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.729 [2024-07-26 23:04:37.212968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.729 [2024-07-26 23:04:37.212994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.729 [2024-07-26 23:04:37.213008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.729 [2024-07-26 23:04:37.213021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:44.729 [2024-07-26 23:04:37.213051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.729 qpair failed and we were unable to recover it. 00:34:44.729 [2024-07-26 23:04:37.222865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.729 [2024-07-26 23:04:37.223009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.729 [2024-07-26 23:04:37.223035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.729 [2024-07-26 23:04:37.223048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.729 [2024-07-26 23:04:37.223070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:44.729 [2024-07-26 23:04:37.223102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.729 qpair failed and we were unable to recover it. 00:34:44.990 [2024-07-26 23:04:37.232852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.990 [2024-07-26 23:04:37.232993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.990 [2024-07-26 23:04:37.233019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.990 [2024-07-26 23:04:37.233033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.991 [2024-07-26 23:04:37.233047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:44.991 [2024-07-26 23:04:37.233085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.991 qpair failed and we were unable to recover it. 
00:34:44.991 [2024-07-26 23:04:37.242909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.243063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.243092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.243110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.243129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.243161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.252909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.253056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.253089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.253104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.253117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.253147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.262945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.263096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.263122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.263136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.263150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.263180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.273009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.273164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.273191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.273205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.273218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.273248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.283031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.283224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.283252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.283267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.283283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.283316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.293072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.293222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.293249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.293263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.293276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.293306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.303075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.303221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.303247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.303261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.303275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.303305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.313100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.313247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.313274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.313293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.313306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.313338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.323207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.323372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.323399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.323413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.323426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.323468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.333148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.333293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.333319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.333333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.333352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.333384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.343265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.343412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.343438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.343457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.343471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.343501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.353199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.353341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.353367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.353381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.353394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.991 [2024-07-26 23:04:37.353424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.991 qpair failed and we were unable to recover it.
00:34:44.991 [2024-07-26 23:04:37.363212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.991 [2024-07-26 23:04:37.363355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.991 [2024-07-26 23:04:37.363381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.991 [2024-07-26 23:04:37.363395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.991 [2024-07-26 23:04:37.363408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.992 [2024-07-26 23:04:37.363437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.992 qpair failed and we were unable to recover it.
00:34:44.992 [2024-07-26 23:04:37.373278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.992 [2024-07-26 23:04:37.373460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.992 [2024-07-26 23:04:37.373486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.992 [2024-07-26 23:04:37.373500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.992 [2024-07-26 23:04:37.373513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.992 [2024-07-26 23:04:37.373542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.992 qpair failed and we were unable to recover it.
00:34:44.992 [2024-07-26 23:04:37.383269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.992 [2024-07-26 23:04:37.383422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.992 [2024-07-26 23:04:37.383448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.992 [2024-07-26 23:04:37.383462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.992 [2024-07-26 23:04:37.383475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.992 [2024-07-26 23:04:37.383506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.992 qpair failed and we were unable to recover it.
00:34:44.992 [2024-07-26 23:04:37.393305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.992 [2024-07-26 23:04:37.393462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.992 [2024-07-26 23:04:37.393488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.992 [2024-07-26 23:04:37.393502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.992 [2024-07-26 23:04:37.393515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.992 [2024-07-26 23:04:37.393544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.992 qpair failed and we were unable to recover it.
00:34:44.992 [2024-07-26 23:04:37.403387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.992 [2024-07-26 23:04:37.403540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.992 [2024-07-26 23:04:37.403578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.992 [2024-07-26 23:04:37.403593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.992 [2024-07-26 23:04:37.403607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.992 [2024-07-26 23:04:37.403649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.992 qpair failed and we were unable to recover it.
00:34:44.992 [2024-07-26 23:04:37.413384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.992 [2024-07-26 23:04:37.413549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.992 [2024-07-26 23:04:37.413576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.992 [2024-07-26 23:04:37.413590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.992 [2024-07-26 23:04:37.413607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.992 [2024-07-26 23:04:37.413637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.992 qpair failed and we were unable to recover it.
00:34:44.992 [2024-07-26 23:04:37.423408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.992 [2024-07-26 23:04:37.423547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.992 [2024-07-26 23:04:37.423573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.992 [2024-07-26 23:04:37.423596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.992 [2024-07-26 23:04:37.423611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.992 [2024-07-26 23:04:37.423641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.992 qpair failed and we were unable to recover it.
00:34:44.992 [2024-07-26 23:04:37.433441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.992 [2024-07-26 23:04:37.433584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.992 [2024-07-26 23:04:37.433610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.992 [2024-07-26 23:04:37.433625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.992 [2024-07-26 23:04:37.433638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.992 [2024-07-26 23:04:37.433667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.992 qpair failed and we were unable to recover it.
00:34:44.992 [2024-07-26 23:04:37.443438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.992 [2024-07-26 23:04:37.443585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.992 [2024-07-26 23:04:37.443610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.992 [2024-07-26 23:04:37.443624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.992 [2024-07-26 23:04:37.443637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:44.992 [2024-07-26 23:04:37.443668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:44.992 qpair failed and we were unable to recover it.
00:34:44.992 [2024-07-26 23:04:37.453534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.992 [2024-07-26 23:04:37.453677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.992 [2024-07-26 23:04:37.453702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.992 [2024-07-26 23:04:37.453716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.992 [2024-07-26 23:04:37.453729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:44.992 [2024-07-26 23:04:37.453761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.992 qpair failed and we were unable to recover it. 00:34:44.992 [2024-07-26 23:04:37.463546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.992 [2024-07-26 23:04:37.463685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.992 [2024-07-26 23:04:37.463711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.992 [2024-07-26 23:04:37.463726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.992 [2024-07-26 23:04:37.463739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:44.992 [2024-07-26 23:04:37.463781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.992 qpair failed and we were unable to recover it. 00:34:44.992 [2024-07-26 23:04:37.473568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.992 [2024-07-26 23:04:37.473751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.992 [2024-07-26 23:04:37.473776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.992 [2024-07-26 23:04:37.473790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.992 [2024-07-26 23:04:37.473804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:44.992 [2024-07-26 23:04:37.473833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.992 qpair failed and we were unable to recover it. 
00:34:44.992 [2024-07-26 23:04:37.483583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.992 [2024-07-26 23:04:37.483725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.992 [2024-07-26 23:04:37.483751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.992 [2024-07-26 23:04:37.483766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.992 [2024-07-26 23:04:37.483779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:44.992 [2024-07-26 23:04:37.483808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.992 qpair failed and we were unable to recover it. 00:34:45.252 [2024-07-26 23:04:37.493609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.252 [2024-07-26 23:04:37.493760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.252 [2024-07-26 23:04:37.493788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.252 [2024-07-26 23:04:37.493808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.252 [2024-07-26 23:04:37.493822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.252 [2024-07-26 23:04:37.493855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.252 qpair failed and we were unable to recover it. 00:34:45.252 [2024-07-26 23:04:37.503655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.252 [2024-07-26 23:04:37.503832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.252 [2024-07-26 23:04:37.503859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.252 [2024-07-26 23:04:37.503873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.252 [2024-07-26 23:04:37.503886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.252 [2024-07-26 23:04:37.503916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.252 qpair failed and we were unable to recover it. 
00:34:45.252 [2024-07-26 23:04:37.513627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.252 [2024-07-26 23:04:37.513763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.252 [2024-07-26 23:04:37.513795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.252 [2024-07-26 23:04:37.513811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.252 [2024-07-26 23:04:37.513824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.252 [2024-07-26 23:04:37.513853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.252 qpair failed and we were unable to recover it. 00:34:45.252 [2024-07-26 23:04:37.523704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.252 [2024-07-26 23:04:37.523854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.252 [2024-07-26 23:04:37.523882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.523901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.523916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.523947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 00:34:45.253 [2024-07-26 23:04:37.533703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.533862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.533888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.533902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.533915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.533945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 
00:34:45.253 [2024-07-26 23:04:37.543740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.543877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.543903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.543917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.543930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.543961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 00:34:45.253 [2024-07-26 23:04:37.553790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.553940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.553967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.553981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.553994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.554045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 00:34:45.253 [2024-07-26 23:04:37.563797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.563955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.563981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.563995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.564009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.564038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 
00:34:45.253 [2024-07-26 23:04:37.573835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.573985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.574011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.574025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.574038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.574075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 00:34:45.253 [2024-07-26 23:04:37.583873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.584008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.584034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.584048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.584071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.584103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 00:34:45.253 [2024-07-26 23:04:37.593960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.594113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.594140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.594156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.594170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.594200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 
00:34:45.253 [2024-07-26 23:04:37.603936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.604089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.604120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.604135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.604148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.604178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 00:34:45.253 [2024-07-26 23:04:37.613950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.614102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.614128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.614143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.614156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.614185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 00:34:45.253 [2024-07-26 23:04:37.623997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.624146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.624171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.624185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.624199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.624229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 
00:34:45.253 [2024-07-26 23:04:37.634018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.634169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.634195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.634209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.634222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.634251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 00:34:45.253 [2024-07-26 23:04:37.644122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.644268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.644294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.644309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.644322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.253 [2024-07-26 23:04:37.644369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.253 qpair failed and we were unable to recover it. 00:34:45.253 [2024-07-26 23:04:37.654082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.253 [2024-07-26 23:04:37.654221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.253 [2024-07-26 23:04:37.654247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.253 [2024-07-26 23:04:37.654261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.253 [2024-07-26 23:04:37.654274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.654304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 
00:34:45.254 [2024-07-26 23:04:37.664096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.254 [2024-07-26 23:04:37.664243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.254 [2024-07-26 23:04:37.664268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.254 [2024-07-26 23:04:37.664283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.254 [2024-07-26 23:04:37.664296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.664325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 00:34:45.254 [2024-07-26 23:04:37.674220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.254 [2024-07-26 23:04:37.674374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.254 [2024-07-26 23:04:37.674399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.254 [2024-07-26 23:04:37.674413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.254 [2024-07-26 23:04:37.674427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.674456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 00:34:45.254 [2024-07-26 23:04:37.684242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.254 [2024-07-26 23:04:37.684389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.254 [2024-07-26 23:04:37.684418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.254 [2024-07-26 23:04:37.684435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.254 [2024-07-26 23:04:37.684448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.684492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 
00:34:45.254 [2024-07-26 23:04:37.694239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.254 [2024-07-26 23:04:37.694388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.254 [2024-07-26 23:04:37.694414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.254 [2024-07-26 23:04:37.694428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.254 [2024-07-26 23:04:37.694441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.694471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 00:34:45.254 [2024-07-26 23:04:37.704270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.254 [2024-07-26 23:04:37.704427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.254 [2024-07-26 23:04:37.704453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.254 [2024-07-26 23:04:37.704467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.254 [2024-07-26 23:04:37.704480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.704510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 00:34:45.254 [2024-07-26 23:04:37.714238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.254 [2024-07-26 23:04:37.714375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.254 [2024-07-26 23:04:37.714401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.254 [2024-07-26 23:04:37.714415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.254 [2024-07-26 23:04:37.714428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.714457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 
00:34:45.254 [2024-07-26 23:04:37.724305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.254 [2024-07-26 23:04:37.724447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.254 [2024-07-26 23:04:37.724472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.254 [2024-07-26 23:04:37.724487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.254 [2024-07-26 23:04:37.724500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.724528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 00:34:45.254 [2024-07-26 23:04:37.734303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.254 [2024-07-26 23:04:37.734450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.254 [2024-07-26 23:04:37.734476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.254 [2024-07-26 23:04:37.734490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.254 [2024-07-26 23:04:37.734508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.734541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 00:34:45.254 [2024-07-26 23:04:37.744371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.254 [2024-07-26 23:04:37.744527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.254 [2024-07-26 23:04:37.744553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.254 [2024-07-26 23:04:37.744567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.254 [2024-07-26 23:04:37.744580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.254 [2024-07-26 23:04:37.744611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.254 qpair failed and we were unable to recover it. 
00:34:45.514 [2024-07-26 23:04:37.754384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.514 [2024-07-26 23:04:37.754531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.514 [2024-07-26 23:04:37.754557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.514 [2024-07-26 23:04:37.754571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.514 [2024-07-26 23:04:37.754584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.514 [2024-07-26 23:04:37.754614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.514 qpair failed and we were unable to recover it. 00:34:45.514 [2024-07-26 23:04:37.764400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.514 [2024-07-26 23:04:37.764562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.514 [2024-07-26 23:04:37.764588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.514 [2024-07-26 23:04:37.764602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.514 [2024-07-26 23:04:37.764615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.514 [2024-07-26 23:04:37.764644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.514 qpair failed and we were unable to recover it. 00:34:45.514 [2024-07-26 23:04:37.774426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.514 [2024-07-26 23:04:37.774564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.514 [2024-07-26 23:04:37.774590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.514 [2024-07-26 23:04:37.774605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.514 [2024-07-26 23:04:37.774618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.514 [2024-07-26 23:04:37.774648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.514 qpair failed and we were unable to recover it. 
00:34:45.514 [2024-07-26 23:04:37.784441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.514 [2024-07-26 23:04:37.784631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.514 [2024-07-26 23:04:37.784656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.514 [2024-07-26 23:04:37.784671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.514 [2024-07-26 23:04:37.784683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.514 [2024-07-26 23:04:37.784715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.514 qpair failed and we were unable to recover it. 00:34:45.514 [2024-07-26 23:04:37.794467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.514 [2024-07-26 23:04:37.794608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.514 [2024-07-26 23:04:37.794634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.514 [2024-07-26 23:04:37.794648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.514 [2024-07-26 23:04:37.794661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.514 [2024-07-26 23:04:37.794691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.514 qpair failed and we were unable to recover it. 00:34:45.514 [2024-07-26 23:04:37.804522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.514 [2024-07-26 23:04:37.804671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.514 [2024-07-26 23:04:37.804696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.514 [2024-07-26 23:04:37.804711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.514 [2024-07-26 23:04:37.804724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.514 [2024-07-26 23:04:37.804754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.514 qpair failed and we were unable to recover it. 
00:34:45.514 [2024-07-26 23:04:37.814515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.514 [2024-07-26 23:04:37.814651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.514 [2024-07-26 23:04:37.814677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.514 [2024-07-26 23:04:37.814691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.514 [2024-07-26 23:04:37.814704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.514 [2024-07-26 23:04:37.814734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.514 qpair failed and we were unable to recover it. 00:34:45.514 [2024-07-26 23:04:37.824636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.514 [2024-07-26 23:04:37.824773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.514 [2024-07-26 23:04:37.824807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.514 [2024-07-26 23:04:37.824828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.514 [2024-07-26 23:04:37.824842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.514 [2024-07-26 23:04:37.824885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.514 qpair failed and we were unable to recover it. 00:34:45.514 [2024-07-26 23:04:37.834576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.514 [2024-07-26 23:04:37.834714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.514 [2024-07-26 23:04:37.834740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.514 [2024-07-26 23:04:37.834754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.514 [2024-07-26 23:04:37.834767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.514 [2024-07-26 23:04:37.834799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.514 qpair failed and we were unable to recover it. 
00:34:45.514 [2024-07-26 23:04:37.844668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.844843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.844869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.844884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.844897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.844929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 00:34:45.515 [2024-07-26 23:04:37.854698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.854853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.854879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.854893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.854907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.854937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 00:34:45.515 [2024-07-26 23:04:37.864665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.864804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.864831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.864845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.864858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.864889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 
00:34:45.515 [2024-07-26 23:04:37.874713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.874854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.874879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.874894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.874907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.874937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 00:34:45.515 [2024-07-26 23:04:37.884816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.884970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.884997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.885012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.885026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.885080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 00:34:45.515 [2024-07-26 23:04:37.894748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.894893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.894920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.894934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.894947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.894979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 
00:34:45.515 [2024-07-26 23:04:37.904791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.904928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.904954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.904969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.904982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.905013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 00:34:45.515 [2024-07-26 23:04:37.914802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.914946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.914977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.914992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.915006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.915036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 00:34:45.515 [2024-07-26 23:04:37.924863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.925009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.925035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.925049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.925074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.925106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 
00:34:45.515 [2024-07-26 23:04:37.934860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.935001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.935027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.935051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.935072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.935104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 00:34:45.515 [2024-07-26 23:04:37.944918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.945069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.945094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.945108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.945121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.945151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 00:34:45.515 [2024-07-26 23:04:37.954973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.515 [2024-07-26 23:04:37.955164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.515 [2024-07-26 23:04:37.955192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.515 [2024-07-26 23:04:37.955206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.515 [2024-07-26 23:04:37.955219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:45.515 [2024-07-26 23:04:37.955256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:45.515 qpair failed and we were unable to recover it. 
00:34:45.515 [2024-07-26 23:04:37.964997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.515 [2024-07-26 23:04:37.965154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.515 [2024-07-26 23:04:37.965182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.515 [2024-07-26 23:04:37.965197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.515 [2024-07-26 23:04:37.965210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:45.515 [2024-07-26 23:04:37.965254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:45.515 qpair failed and we were unable to recover it.
[... the identical seven-record sequence repeats 68 more times at roughly 10 ms intervals, from 23:04:37.975 through 23:04:38.647 (log clock 00:34:45.515-00:34:46.300); only the timestamps advance, while the failing connection (tqpair=0x7fd440000b90, qpair id 2), the target (trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1), and the completion status (sct 1, sc 130; CQ transport error -6) are the same in every repetition ...]
00:34:46.300 [2024-07-26 23:04:38.656946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.300 [2024-07-26 23:04:38.657115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.300 [2024-07-26 23:04:38.657141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.300 [2024-07-26 23:04:38.657156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.300 [2024-07-26 23:04:38.657171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.300 [2024-07-26 23:04:38.657202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.300 qpair failed and we were unable to recover it. 00:34:46.300 [2024-07-26 23:04:38.666946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.300 [2024-07-26 23:04:38.667100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.300 [2024-07-26 23:04:38.667132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.300 [2024-07-26 23:04:38.667146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.300 [2024-07-26 23:04:38.667160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.300 [2024-07-26 23:04:38.667190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.300 qpair failed and we were unable to recover it. 00:34:46.300 [2024-07-26 23:04:38.677005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.300 [2024-07-26 23:04:38.677172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.300 [2024-07-26 23:04:38.677199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.300 [2024-07-26 23:04:38.677213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.300 [2024-07-26 23:04:38.677226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.300 [2024-07-26 23:04:38.677256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.300 qpair failed and we were unable to recover it. 
00:34:46.300 [2024-07-26 23:04:38.687024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.300 [2024-07-26 23:04:38.687180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.300 [2024-07-26 23:04:38.687206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.300 [2024-07-26 23:04:38.687220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.300 [2024-07-26 23:04:38.687233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.300 [2024-07-26 23:04:38.687264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.300 qpair failed and we were unable to recover it. 00:34:46.300 [2024-07-26 23:04:38.697053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.300 [2024-07-26 23:04:38.697206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.300 [2024-07-26 23:04:38.697232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.300 [2024-07-26 23:04:38.697247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.300 [2024-07-26 23:04:38.697259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.300 [2024-07-26 23:04:38.697289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.300 qpair failed and we were unable to recover it. 00:34:46.300 [2024-07-26 23:04:38.707068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.300 [2024-07-26 23:04:38.707223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.300 [2024-07-26 23:04:38.707250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.300 [2024-07-26 23:04:38.707264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.300 [2024-07-26 23:04:38.707277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.300 [2024-07-26 23:04:38.707307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.300 qpair failed and we were unable to recover it. 
00:34:46.300 [2024-07-26 23:04:38.717099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.300 [2024-07-26 23:04:38.717268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.300 [2024-07-26 23:04:38.717293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.300 [2024-07-26 23:04:38.717314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.300 [2024-07-26 23:04:38.717328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.300 [2024-07-26 23:04:38.717357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.301 qpair failed and we were unable to recover it. 00:34:46.301 [2024-07-26 23:04:38.727113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.301 [2024-07-26 23:04:38.727261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.301 [2024-07-26 23:04:38.727286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.301 [2024-07-26 23:04:38.727300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.301 [2024-07-26 23:04:38.727313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.301 [2024-07-26 23:04:38.727343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.301 qpair failed and we were unable to recover it. 00:34:46.301 [2024-07-26 23:04:38.737148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.301 [2024-07-26 23:04:38.737334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.301 [2024-07-26 23:04:38.737360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.301 [2024-07-26 23:04:38.737374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.301 [2024-07-26 23:04:38.737387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.301 [2024-07-26 23:04:38.737417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.301 qpair failed and we were unable to recover it. 
00:34:46.301 [2024-07-26 23:04:38.747189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.301 [2024-07-26 23:04:38.747330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.301 [2024-07-26 23:04:38.747356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.301 [2024-07-26 23:04:38.747370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.301 [2024-07-26 23:04:38.747383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.301 [2024-07-26 23:04:38.747412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.301 qpair failed and we were unable to recover it. 00:34:46.301 [2024-07-26 23:04:38.757286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.301 [2024-07-26 23:04:38.757421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.301 [2024-07-26 23:04:38.757447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.301 [2024-07-26 23:04:38.757462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.301 [2024-07-26 23:04:38.757475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.301 [2024-07-26 23:04:38.757517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.301 qpair failed and we were unable to recover it. 00:34:46.301 [2024-07-26 23:04:38.767274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.301 [2024-07-26 23:04:38.767428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.301 [2024-07-26 23:04:38.767454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.301 [2024-07-26 23:04:38.767469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.301 [2024-07-26 23:04:38.767482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.301 [2024-07-26 23:04:38.767512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.301 qpair failed and we were unable to recover it. 
00:34:46.301 [2024-07-26 23:04:38.777265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.301 [2024-07-26 23:04:38.777410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.301 [2024-07-26 23:04:38.777436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.301 [2024-07-26 23:04:38.777450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.301 [2024-07-26 23:04:38.777464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.301 [2024-07-26 23:04:38.777494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.301 qpair failed and we were unable to recover it. 00:34:46.301 [2024-07-26 23:04:38.787362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.301 [2024-07-26 23:04:38.787501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.301 [2024-07-26 23:04:38.787528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.301 [2024-07-26 23:04:38.787542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.301 [2024-07-26 23:04:38.787555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.301 [2024-07-26 23:04:38.787598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.301 qpair failed and we were unable to recover it. 00:34:46.301 [2024-07-26 23:04:38.797313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.301 [2024-07-26 23:04:38.797454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.301 [2024-07-26 23:04:38.797479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.301 [2024-07-26 23:04:38.797494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.301 [2024-07-26 23:04:38.797506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.301 [2024-07-26 23:04:38.797538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.301 qpair failed and we were unable to recover it. 
00:34:46.561 [2024-07-26 23:04:38.807369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.561 [2024-07-26 23:04:38.807518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.561 [2024-07-26 23:04:38.807549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.561 [2024-07-26 23:04:38.807564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.561 [2024-07-26 23:04:38.807577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.561 [2024-07-26 23:04:38.807619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.561 qpair failed and we were unable to recover it. 00:34:46.561 [2024-07-26 23:04:38.817370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.561 [2024-07-26 23:04:38.817527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.561 [2024-07-26 23:04:38.817554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.561 [2024-07-26 23:04:38.817568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.561 [2024-07-26 23:04:38.817581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.561 [2024-07-26 23:04:38.817611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.561 qpair failed and we were unable to recover it. 00:34:46.561 [2024-07-26 23:04:38.827413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.561 [2024-07-26 23:04:38.827559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.561 [2024-07-26 23:04:38.827585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.561 [2024-07-26 23:04:38.827603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.561 [2024-07-26 23:04:38.827617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.561 [2024-07-26 23:04:38.827647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.561 qpair failed and we were unable to recover it. 
00:34:46.561 [2024-07-26 23:04:38.837418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.837560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.837585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.837599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.837613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.837642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 00:34:46.562 [2024-07-26 23:04:38.847463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.847609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.847635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.847649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.847662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.847697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 00:34:46.562 [2024-07-26 23:04:38.857515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.857656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.857681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.857696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.857709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.857738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 
00:34:46.562 [2024-07-26 23:04:38.867542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.867719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.867745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.867760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.867775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.867805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 00:34:46.562 [2024-07-26 23:04:38.877571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.877707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.877732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.877747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.877760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.877789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 00:34:46.562 [2024-07-26 23:04:38.887588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.887739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.887765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.887780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.887793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.887836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 
00:34:46.562 [2024-07-26 23:04:38.897589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.897754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.897784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.897799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.897814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.897845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 00:34:46.562 [2024-07-26 23:04:38.907628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.907774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.907800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.907814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.907828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.907857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 00:34:46.562 [2024-07-26 23:04:38.917679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.917815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.917841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.917855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.917868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.917910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 
00:34:46.562 [2024-07-26 23:04:38.927720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.927863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.927889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.927903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.927916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.927947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 00:34:46.562 [2024-07-26 23:04:38.937762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.937900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.937925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.937939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.937957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.938002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 00:34:46.562 [2024-07-26 23:04:38.947758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.947901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.947927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.947941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.947954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.947984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 
00:34:46.562 [2024-07-26 23:04:38.957782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.957920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.562 [2024-07-26 23:04:38.957946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.562 [2024-07-26 23:04:38.957960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.562 [2024-07-26 23:04:38.957973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.562 [2024-07-26 23:04:38.958002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.562 qpair failed and we were unable to recover it. 00:34:46.562 [2024-07-26 23:04:38.967795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.562 [2024-07-26 23:04:38.967937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:38.967963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:38.967977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:38.967990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:38.968019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 00:34:46.563 [2024-07-26 23:04:38.977866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.563 [2024-07-26 23:04:38.978056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:38.978090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:38.978110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:38.978123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:38.978154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 
00:34:46.563 [2024-07-26 23:04:38.987868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.563 [2024-07-26 23:04:38.988029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:38.988054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:38.988075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:38.988089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:38.988119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 00:34:46.563 [2024-07-26 23:04:38.997898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.563 [2024-07-26 23:04:38.998080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:38.998106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:38.998121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:38.998134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:38.998164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 00:34:46.563 [2024-07-26 23:04:39.007907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.563 [2024-07-26 23:04:39.008070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:39.008097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:39.008111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:39.008125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:39.008155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 
00:34:46.563 [2024-07-26 23:04:39.017949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.563 [2024-07-26 23:04:39.018107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:39.018135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:39.018153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:39.018166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:39.018198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 00:34:46.563 [2024-07-26 23:04:39.027975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.563 [2024-07-26 23:04:39.028146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:39.028175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:39.028192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:39.028214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:39.028247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 00:34:46.563 [2024-07-26 23:04:39.037991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.563 [2024-07-26 23:04:39.038153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:39.038180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:39.038199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:39.038213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:39.038244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 
00:34:46.563 [2024-07-26 23:04:39.048028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.563 [2024-07-26 23:04:39.048185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:39.048211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:39.048226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:39.048238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:39.048267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 00:34:46.563 [2024-07-26 23:04:39.058074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.563 [2024-07-26 23:04:39.058230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.563 [2024-07-26 23:04:39.058257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.563 [2024-07-26 23:04:39.058271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.563 [2024-07-26 23:04:39.058285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.563 [2024-07-26 23:04:39.058315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.563 qpair failed and we were unable to recover it. 00:34:46.823 [2024-07-26 23:04:39.068144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.823 [2024-07-26 23:04:39.068308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.823 [2024-07-26 23:04:39.068334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.823 [2024-07-26 23:04:39.068349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.823 [2024-07-26 23:04:39.068362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.823 [2024-07-26 23:04:39.068392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.823 qpair failed and we were unable to recover it. 
00:34:46.823 [2024-07-26 23:04:39.078115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.823 [2024-07-26 23:04:39.078255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.823 [2024-07-26 23:04:39.078282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.823 [2024-07-26 23:04:39.078297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.823 [2024-07-26 23:04:39.078310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.823 [2024-07-26 23:04:39.078353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.823 qpair failed and we were unable to recover it. 00:34:46.823 [2024-07-26 23:04:39.088130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.823 [2024-07-26 23:04:39.088277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.823 [2024-07-26 23:04:39.088302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.823 [2024-07-26 23:04:39.088316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.823 [2024-07-26 23:04:39.088328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.823 [2024-07-26 23:04:39.088357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.823 qpair failed and we were unable to recover it. 00:34:46.824 [2024-07-26 23:04:39.098203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.098373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.098400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.098416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.098429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.098465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 
00:34:46.824 [2024-07-26 23:04:39.108165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.108310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.108336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.108351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.108364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.108394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 00:34:46.824 [2024-07-26 23:04:39.118207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.118343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.118369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.118390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.118404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.118436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 00:34:46.824 [2024-07-26 23:04:39.128227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.128377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.128402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.128417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.128430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.128459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 
00:34:46.824 [2024-07-26 23:04:39.138312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.138505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.138532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.138551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.138565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.138595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 00:34:46.824 [2024-07-26 23:04:39.148384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.148548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.148575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.148590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.148603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.148644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 00:34:46.824 [2024-07-26 23:04:39.158341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.158481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.158508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.158522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.158536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.158566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 
00:34:46.824 [2024-07-26 23:04:39.168381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.168525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.168551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.168565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.168578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.168608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 00:34:46.824 [2024-07-26 23:04:39.178444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.178584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.178610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.178624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.178637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.178667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 00:34:46.824 [2024-07-26 23:04:39.188429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.188570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.188596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.188610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.188623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.188653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 
00:34:46.824 [2024-07-26 23:04:39.198489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.198668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.198693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.198708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.198722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.198751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 00:34:46.824 [2024-07-26 23:04:39.208517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.208660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.208691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.208707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.208720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.208762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 00:34:46.824 [2024-07-26 23:04:39.218533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.824 [2024-07-26 23:04:39.218680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.824 [2024-07-26 23:04:39.218707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.824 [2024-07-26 23:04:39.218721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.824 [2024-07-26 23:04:39.218734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.824 [2024-07-26 23:04:39.218764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.824 qpair failed and we were unable to recover it. 
00:34:46.824 [2024-07-26 23:04:39.228537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.228680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.228705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.228720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.228733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.228765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 00:34:46.825 [2024-07-26 23:04:39.238697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.238908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.238934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.238949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.238962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.238991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 00:34:46.825 [2024-07-26 23:04:39.248669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.248813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.248840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.248855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.248868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.248904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 
00:34:46.825 [2024-07-26 23:04:39.258657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.258846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.258872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.258886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.258900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.258929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 00:34:46.825 [2024-07-26 23:04:39.268701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.268867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.268892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.268906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.268920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.268949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 00:34:46.825 [2024-07-26 23:04:39.278721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.278866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.278892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.278906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.278919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.278950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 
00:34:46.825 [2024-07-26 23:04:39.288704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.288848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.288874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.288888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.288901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.288931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 00:34:46.825 [2024-07-26 23:04:39.298741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.298883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.298914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.298930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.298943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.298973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 00:34:46.825 [2024-07-26 23:04:39.309018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.309168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.309194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.309208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.309221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.309252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 
00:34:46.825 [2024-07-26 23:04:39.318781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.825 [2024-07-26 23:04:39.318921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.825 [2024-07-26 23:04:39.318947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.825 [2024-07-26 23:04:39.318961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.825 [2024-07-26 23:04:39.318974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:46.825 [2024-07-26 23:04:39.319004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.825 qpair failed and we were unable to recover it. 00:34:47.087 [2024-07-26 23:04:39.328801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.328999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.087 [2024-07-26 23:04:39.329024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.087 [2024-07-26 23:04:39.329039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.087 [2024-07-26 23:04:39.329052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.087 [2024-07-26 23:04:39.329092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.087 qpair failed and we were unable to recover it. 00:34:47.087 [2024-07-26 23:04:39.338886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.339039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.087 [2024-07-26 23:04:39.339076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.087 [2024-07-26 23:04:39.339093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.087 [2024-07-26 23:04:39.339107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.087 [2024-07-26 23:04:39.339143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.087 qpair failed and we were unable to recover it. 
00:34:47.087 [2024-07-26 23:04:39.348870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.349031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.087 [2024-07-26 23:04:39.349057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.087 [2024-07-26 23:04:39.349083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.087 [2024-07-26 23:04:39.349097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.087 [2024-07-26 23:04:39.349127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.087 qpair failed and we were unable to recover it. 00:34:47.087 [2024-07-26 23:04:39.358899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.359048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.087 [2024-07-26 23:04:39.359083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.087 [2024-07-26 23:04:39.359098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.087 [2024-07-26 23:04:39.359111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.087 [2024-07-26 23:04:39.359141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.087 qpair failed and we were unable to recover it. 00:34:47.087 [2024-07-26 23:04:39.368911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.369091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.087 [2024-07-26 23:04:39.369117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.087 [2024-07-26 23:04:39.369131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.087 [2024-07-26 23:04:39.369145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.087 [2024-07-26 23:04:39.369175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.087 qpair failed and we were unable to recover it. 
00:34:47.087 [2024-07-26 23:04:39.378950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.379105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.087 [2024-07-26 23:04:39.379131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.087 [2024-07-26 23:04:39.379145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.087 [2024-07-26 23:04:39.379159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.087 [2024-07-26 23:04:39.379188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.087 qpair failed and we were unable to recover it. 00:34:47.087 [2024-07-26 23:04:39.388998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.389149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.087 [2024-07-26 23:04:39.389176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.087 [2024-07-26 23:04:39.389191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.087 [2024-07-26 23:04:39.389204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.087 [2024-07-26 23:04:39.389234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.087 qpair failed and we were unable to recover it. 00:34:47.087 [2024-07-26 23:04:39.399001] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.399152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.087 [2024-07-26 23:04:39.399178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.087 [2024-07-26 23:04:39.399193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.087 [2024-07-26 23:04:39.399206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.087 [2024-07-26 23:04:39.399236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.087 qpair failed and we were unable to recover it. 
00:34:47.087 [2024-07-26 23:04:39.409074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.409253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.087 [2024-07-26 23:04:39.409278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.087 [2024-07-26 23:04:39.409293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.087 [2024-07-26 23:04:39.409306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.087 [2024-07-26 23:04:39.409336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.087 qpair failed and we were unable to recover it. 00:34:47.087 [2024-07-26 23:04:39.419070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.087 [2024-07-26 23:04:39.419211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.419237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.419251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.419265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.419295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 00:34:47.088 [2024-07-26 23:04:39.429127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.429276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.429302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.429317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.429336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.429369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 
00:34:47.088 [2024-07-26 23:04:39.439144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.439288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.439313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.439327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.439340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.439370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 00:34:47.088 [2024-07-26 23:04:39.449175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.449339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.449366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.449380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.449393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.449423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 00:34:47.088 [2024-07-26 23:04:39.459163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.459307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.459332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.459346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.459359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.459389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 
00:34:47.088 [2024-07-26 23:04:39.469220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.469372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.469397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.469412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.469425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.469454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 00:34:47.088 [2024-07-26 23:04:39.479243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.479396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.479421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.479436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.479449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.479479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 00:34:47.088 [2024-07-26 23:04:39.489314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.489497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.489522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.489536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.489550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.489578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 
00:34:47.088 [2024-07-26 23:04:39.499372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.499510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.499536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.499550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.499563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.499604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 00:34:47.088 [2024-07-26 23:04:39.509304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.509465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.509491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.509505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.509518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.509548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 00:34:47.088 [2024-07-26 23:04:39.519362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.519537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.519564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.519585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.519600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.519631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 
00:34:47.088 [2024-07-26 23:04:39.529420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.529583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.529609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.529623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.529636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.088 [2024-07-26 23:04:39.529668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.088 qpair failed and we were unable to recover it. 00:34:47.088 [2024-07-26 23:04:39.539410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.088 [2024-07-26 23:04:39.539556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.088 [2024-07-26 23:04:39.539582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.088 [2024-07-26 23:04:39.539596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.088 [2024-07-26 23:04:39.539609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.089 [2024-07-26 23:04:39.539638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.089 qpair failed and we were unable to recover it. 00:34:47.089 [2024-07-26 23:04:39.549526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.089 [2024-07-26 23:04:39.549661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.089 [2024-07-26 23:04:39.549687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.089 [2024-07-26 23:04:39.549702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.089 [2024-07-26 23:04:39.549715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.089 [2024-07-26 23:04:39.549757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.089 qpair failed and we were unable to recover it. 
00:34:47.089 [2024-07-26 23:04:39.559468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.089 [2024-07-26 23:04:39.559624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.089 [2024-07-26 23:04:39.559650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.089 [2024-07-26 23:04:39.559664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.089 [2024-07-26 23:04:39.559677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.089 [2024-07-26 23:04:39.559706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.089 qpair failed and we were unable to recover it. 00:34:47.089 [2024-07-26 23:04:39.569582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.089 [2024-07-26 23:04:39.569725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.089 [2024-07-26 23:04:39.569751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.089 [2024-07-26 23:04:39.569765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.089 [2024-07-26 23:04:39.569778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.089 [2024-07-26 23:04:39.569819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.089 qpair failed and we were unable to recover it. 00:34:47.089 [2024-07-26 23:04:39.579522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.089 [2024-07-26 23:04:39.579693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.089 [2024-07-26 23:04:39.579718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.089 [2024-07-26 23:04:39.579733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.089 [2024-07-26 23:04:39.579746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.089 [2024-07-26 23:04:39.579775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.089 qpair failed and we were unable to recover it. 
00:34:47.349 [2024-07-26 23:04:39.589582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.349 [2024-07-26 23:04:39.589739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.349 [2024-07-26 23:04:39.589765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.349 [2024-07-26 23:04:39.589779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.349 [2024-07-26 23:04:39.589793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.349 [2024-07-26 23:04:39.589822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.349 qpair failed and we were unable to recover it. 00:34:47.349 [2024-07-26 23:04:39.599609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.349 [2024-07-26 23:04:39.599749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.349 [2024-07-26 23:04:39.599775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.349 [2024-07-26 23:04:39.599790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.349 [2024-07-26 23:04:39.599803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.349 [2024-07-26 23:04:39.599834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.349 qpair failed and we were unable to recover it. 00:34:47.349 [2024-07-26 23:04:39.609608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.349 [2024-07-26 23:04:39.609752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.349 [2024-07-26 23:04:39.609784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.349 [2024-07-26 23:04:39.609799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.349 [2024-07-26 23:04:39.609812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.349 [2024-07-26 23:04:39.609842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.349 qpair failed and we were unable to recover it. 
00:34:47.349 [2024-07-26 23:04:39.619666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.349 [2024-07-26 23:04:39.619810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.349 [2024-07-26 23:04:39.619836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.349 [2024-07-26 23:04:39.619850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.349 [2024-07-26 23:04:39.619863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.349 [2024-07-26 23:04:39.619894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.349 qpair failed and we were unable to recover it. 00:34:47.349 [2024-07-26 23:04:39.629657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.349 [2024-07-26 23:04:39.629797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.349 [2024-07-26 23:04:39.629822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.349 [2024-07-26 23:04:39.629836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.349 [2024-07-26 23:04:39.629849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.349 [2024-07-26 23:04:39.629879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.349 qpair failed and we were unable to recover it. 00:34:47.349 [2024-07-26 23:04:39.639696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.349 [2024-07-26 23:04:39.639845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.349 [2024-07-26 23:04:39.639870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.349 [2024-07-26 23:04:39.639884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.349 [2024-07-26 23:04:39.639897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.349 [2024-07-26 23:04:39.639928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.349 qpair failed and we were unable to recover it. 
00:34:47.349 [2024-07-26 23:04:39.649779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.349 [2024-07-26 23:04:39.649931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.349 [2024-07-26 23:04:39.649958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.349 [2024-07-26 23:04:39.649978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.349 [2024-07-26 23:04:39.649992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.349 [2024-07-26 23:04:39.650028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.349 qpair failed and we were unable to recover it. 00:34:47.349 [2024-07-26 23:04:39.659765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.349 [2024-07-26 23:04:39.659901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.349 [2024-07-26 23:04:39.659928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.349 [2024-07-26 23:04:39.659942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.349 [2024-07-26 23:04:39.659955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.349 [2024-07-26 23:04:39.659999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.349 qpair failed and we were unable to recover it. 00:34:47.349 [2024-07-26 23:04:39.669795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.349 [2024-07-26 23:04:39.669971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.349 [2024-07-26 23:04:39.669997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.350 [2024-07-26 23:04:39.670012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.350 [2024-07-26 23:04:39.670025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.350 [2024-07-26 23:04:39.670057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.350 qpair failed and we were unable to recover it. 
00:34:47.350 [2024-07-26 23:04:39.679844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.350 [2024-07-26 23:04:39.679997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.350 [2024-07-26 23:04:39.680024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.350 [2024-07-26 23:04:39.680038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.350 [2024-07-26 23:04:39.680051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.350 [2024-07-26 23:04:39.680090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.350 qpair failed and we were unable to recover it. 00:34:47.350 [2024-07-26 23:04:39.689846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.350 [2024-07-26 23:04:39.689992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.350 [2024-07-26 23:04:39.690018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.350 [2024-07-26 23:04:39.690032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.350 [2024-07-26 23:04:39.690045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.350 [2024-07-26 23:04:39.690084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.350 qpair failed and we were unable to recover it. 00:34:47.350 [2024-07-26 23:04:39.699849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.350 [2024-07-26 23:04:39.699992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.350 [2024-07-26 23:04:39.700023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.350 [2024-07-26 23:04:39.700038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.350 [2024-07-26 23:04:39.700051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.350 [2024-07-26 23:04:39.700092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.350 qpair failed and we were unable to recover it. 
00:34:47.350 [2024-07-26 23:04:39.709880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.350 [2024-07-26 23:04:39.710055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.350 [2024-07-26 23:04:39.710087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.350 [2024-07-26 23:04:39.710102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.350 [2024-07-26 23:04:39.710115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.350 [2024-07-26 23:04:39.710147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.350 qpair failed and we were unable to recover it. 00:34:47.350 [2024-07-26 23:04:39.720048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.350 [2024-07-26 23:04:39.720233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.350 [2024-07-26 23:04:39.720260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.350 [2024-07-26 23:04:39.720274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.350 [2024-07-26 23:04:39.720287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.350 [2024-07-26 23:04:39.720329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.350 qpair failed and we were unable to recover it. 00:34:47.350 [2024-07-26 23:04:39.729950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.350 [2024-07-26 23:04:39.730147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.350 [2024-07-26 23:04:39.730174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.350 [2024-07-26 23:04:39.730189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.350 [2024-07-26 23:04:39.730202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.350 [2024-07-26 23:04:39.730232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.350 qpair failed and we were unable to recover it. 
00:34:47.350 [2024-07-26 23:04:39.739985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.350 [2024-07-26 23:04:39.740140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.350 [2024-07-26 23:04:39.740167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.350 [2024-07-26 23:04:39.740181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.350 [2024-07-26 23:04:39.740194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.350 [2024-07-26 23:04:39.740233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.350 qpair failed and we were unable to recover it.
00:34:47.350 [2024-07-26 23:04:39.750008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.350 [2024-07-26 23:04:39.750196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.350 [2024-07-26 23:04:39.750222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.350 [2024-07-26 23:04:39.750236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.350 [2024-07-26 23:04:39.750250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.350 [2024-07-26 23:04:39.750279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.350 qpair failed and we were unable to recover it.
00:34:47.350 [2024-07-26 23:04:39.760073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.350 [2024-07-26 23:04:39.760225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.350 [2024-07-26 23:04:39.760250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.350 [2024-07-26 23:04:39.760264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.350 [2024-07-26 23:04:39.760277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.350 [2024-07-26 23:04:39.760307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.350 qpair failed and we were unable to recover it.
00:34:47.350 [2024-07-26 23:04:39.770099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.350 [2024-07-26 23:04:39.770245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.350 [2024-07-26 23:04:39.770272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.350 [2024-07-26 23:04:39.770286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.350 [2024-07-26 23:04:39.770299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.350 [2024-07-26 23:04:39.770329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.350 qpair failed and we were unable to recover it.
00:34:47.350 [2024-07-26 23:04:39.780109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.350 [2024-07-26 23:04:39.780271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.350 [2024-07-26 23:04:39.780296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.350 [2024-07-26 23:04:39.780310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.350 [2024-07-26 23:04:39.780323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.350 [2024-07-26 23:04:39.780355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.350 qpair failed and we were unable to recover it.
00:34:47.350 [2024-07-26 23:04:39.790142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.350 [2024-07-26 23:04:39.790285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.350 [2024-07-26 23:04:39.790317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.350 [2024-07-26 23:04:39.790332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.350 [2024-07-26 23:04:39.790345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.350 [2024-07-26 23:04:39.790388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.350 qpair failed and we were unable to recover it.
00:34:47.350 [2024-07-26 23:04:39.800216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.350 [2024-07-26 23:04:39.800400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.350 [2024-07-26 23:04:39.800426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.350 [2024-07-26 23:04:39.800441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.350 [2024-07-26 23:04:39.800454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.351 [2024-07-26 23:04:39.800484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.351 qpair failed and we were unable to recover it.
00:34:47.351 [2024-07-26 23:04:39.810229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.351 [2024-07-26 23:04:39.810383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.351 [2024-07-26 23:04:39.810409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.351 [2024-07-26 23:04:39.810423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.351 [2024-07-26 23:04:39.810436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.351 [2024-07-26 23:04:39.810466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.351 qpair failed and we were unable to recover it.
00:34:47.351 [2024-07-26 23:04:39.820225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.351 [2024-07-26 23:04:39.820364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.351 [2024-07-26 23:04:39.820390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.351 [2024-07-26 23:04:39.820404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.351 [2024-07-26 23:04:39.820418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.351 [2024-07-26 23:04:39.820447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.351 qpair failed and we were unable to recover it.
00:34:47.351 [2024-07-26 23:04:39.830311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.351 [2024-07-26 23:04:39.830480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.351 [2024-07-26 23:04:39.830506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.351 [2024-07-26 23:04:39.830520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.351 [2024-07-26 23:04:39.830539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.351 [2024-07-26 23:04:39.830571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.351 qpair failed and we were unable to recover it.
00:34:47.351 [2024-07-26 23:04:39.840293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.351 [2024-07-26 23:04:39.840468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.351 [2024-07-26 23:04:39.840494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.351 [2024-07-26 23:04:39.840508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.351 [2024-07-26 23:04:39.840521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.351 [2024-07-26 23:04:39.840565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.351 qpair failed and we were unable to recover it.
00:34:47.351 [2024-07-26 23:04:39.850394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.351 [2024-07-26 23:04:39.850544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.351 [2024-07-26 23:04:39.850570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.351 [2024-07-26 23:04:39.850584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.351 [2024-07-26 23:04:39.850597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.351 [2024-07-26 23:04:39.850639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.351 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.860375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.860581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.860607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.860621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.860634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.860664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.870446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.870588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.870613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.870628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.870641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.870670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.880385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.880586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.880612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.880626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.880639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.880668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.890418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.890608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.890634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.890648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.890661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.890691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.900460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.900608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.900633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.900648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.900661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.900691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.910509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.910658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.910684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.910698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.910711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.910743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.920577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.920708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.920735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.920755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.920769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.920814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.930556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.930705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.930731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.930745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.930758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.930788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.940566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.940714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.940740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.940754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.940767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.940796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.950615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.950757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.950782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.950796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.950809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.950839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.960615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.960753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.960778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.960793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.960806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.611 [2024-07-26 23:04:39.960836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.611 qpair failed and we were unable to recover it.
00:34:47.611 [2024-07-26 23:04:39.970681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.611 [2024-07-26 23:04:39.970825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.611 [2024-07-26 23:04:39.970850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.611 [2024-07-26 23:04:39.970865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.611 [2024-07-26 23:04:39.970878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:39.970907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:39.980659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:39.980812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:39.980838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:39.980852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:39.980865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:39.980894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:39.990714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:39.990851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:39.990877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:39.990892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:39.990905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:39.990934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.000850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.001033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.001067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.001085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.001099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.001141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.010833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.011014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.011044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.011076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.011093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.011127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.020858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.021071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.021098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.021113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.021126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.021156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.030820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.030967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.030994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.031009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.031022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.031052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.040916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.041070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.041097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.041117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.041131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.041162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.050916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.051068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.051095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.051110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.051122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.051151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.060927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.061080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.061107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.061122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.061135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.061180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.070939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.071090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.071116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.071131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.071144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.071174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.080949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.081141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.081169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.081188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.081202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.081235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.091015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.091173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.091198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.091211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.091224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.091253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.612 [2024-07-26 23:04:40.101026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.612 [2024-07-26 23:04:40.101210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.612 [2024-07-26 23:04:40.101241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.612 [2024-07-26 23:04:40.101257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.612 [2024-07-26 23:04:40.101270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.612 [2024-07-26 23:04:40.101301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.612 qpair failed and we were unable to recover it.
00:34:47.613 [2024-07-26 23:04:40.111035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.613 [2024-07-26 23:04:40.111187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.613 [2024-07-26 23:04:40.111214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.613 [2024-07-26 23:04:40.111228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.613 [2024-07-26 23:04:40.111241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.613 [2024-07-26 23:04:40.111271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.613 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.121075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.121256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.121282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.121297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.121311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.121343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.131109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.131271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.131297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.131312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.131325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.131355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.141214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.141358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.141388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.141403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.141416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.141464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.151184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.151337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.151366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.151381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.151394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.151423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.161212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.161364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.161391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.161406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.161419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.161449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.171307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.171473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.171501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.171516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.171529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.171559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.181282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.181438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.181465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.181480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.181493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.181523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.191324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.191511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.191542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.191557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.191570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.191599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.201323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.201493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.201519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.201534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.201547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.201578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.211462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.211607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.211633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.211648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.873 [2024-07-26 23:04:40.211661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.873 [2024-07-26 23:04:40.211703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.873 qpair failed and we were unable to recover it.
00:34:47.873 [2024-07-26 23:04:40.221385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.873 [2024-07-26 23:04:40.221554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.873 [2024-07-26 23:04:40.221580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.873 [2024-07-26 23:04:40.221594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.874 [2024-07-26 23:04:40.221608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.874 [2024-07-26 23:04:40.221637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.874 qpair failed and we were unable to recover it.
00:34:47.874 [2024-07-26 23:04:40.231415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.874 [2024-07-26 23:04:40.231551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.874 [2024-07-26 23:04:40.231577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.874 [2024-07-26 23:04:40.231592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.874 [2024-07-26 23:04:40.231611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.874 [2024-07-26 23:04:40.231643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.874 qpair failed and we were unable to recover it.
00:34:47.874 [2024-07-26 23:04:40.241404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.874 [2024-07-26 23:04:40.241549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.874 [2024-07-26 23:04:40.241576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.874 [2024-07-26 23:04:40.241591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.874 [2024-07-26 23:04:40.241605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.874 [2024-07-26 23:04:40.241637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.874 qpair failed and we were unable to recover it.
00:34:47.874 [2024-07-26 23:04:40.251503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.874 [2024-07-26 23:04:40.251658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.874 [2024-07-26 23:04:40.251685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.874 [2024-07-26 23:04:40.251700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.874 [2024-07-26 23:04:40.251713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.874 [2024-07-26 23:04:40.251745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.874 qpair failed and we were unable to recover it.
00:34:47.874 [2024-07-26 23:04:40.261526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.874 [2024-07-26 23:04:40.261712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.874 [2024-07-26 23:04:40.261739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.874 [2024-07-26 23:04:40.261753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.874 [2024-07-26 23:04:40.261766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.874 [2024-07-26 23:04:40.261796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.874 qpair failed and we were unable to recover it.
00:34:47.874 [2024-07-26 23:04:40.271510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.874 [2024-07-26 23:04:40.271646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.874 [2024-07-26 23:04:40.271672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.874 [2024-07-26 23:04:40.271687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.874 [2024-07-26 23:04:40.271700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.874 [2024-07-26 23:04:40.271729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.874 qpair failed and we were unable to recover it.
00:34:47.874 [2024-07-26 23:04:40.281540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.874 [2024-07-26 23:04:40.281690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.874 [2024-07-26 23:04:40.281716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.874 [2024-07-26 23:04:40.281731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.874 [2024-07-26 23:04:40.281744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.874 [2024-07-26 23:04:40.281774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.874 qpair failed and we were unable to recover it.
00:34:47.874 [2024-07-26 23:04:40.291592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.874 [2024-07-26 23:04:40.291735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.874 [2024-07-26 23:04:40.291761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.874 [2024-07-26 23:04:40.291775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.874 [2024-07-26 23:04:40.291789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.874 [2024-07-26 23:04:40.291831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.874 qpair failed and we were unable to recover it.
00:34:47.874 [2024-07-26 23:04:40.301604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.874 [2024-07-26 23:04:40.301758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.874 [2024-07-26 23:04:40.301784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.874 [2024-07-26 23:04:40.301799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.874 [2024-07-26 23:04:40.301812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90
00:34:47.874 [2024-07-26 23:04:40.301844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:47.874 qpair failed and we were unable to recover it.
00:34:47.874 [2024-07-26 23:04:40.311638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.874 [2024-07-26 23:04:40.311775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.874 [2024-07-26 23:04:40.311801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.874 [2024-07-26 23:04:40.311815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.874 [2024-07-26 23:04:40.311828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.874 [2024-07-26 23:04:40.311857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.874 qpair failed and we were unable to recover it. 00:34:47.874 [2024-07-26 23:04:40.321678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.874 [2024-07-26 23:04:40.321884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.874 [2024-07-26 23:04:40.321912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.874 [2024-07-26 23:04:40.321934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.874 [2024-07-26 23:04:40.321948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.874 [2024-07-26 23:04:40.321980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.874 qpair failed and we were unable to recover it. 00:34:47.874 [2024-07-26 23:04:40.331718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.874 [2024-07-26 23:04:40.331862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.874 [2024-07-26 23:04:40.331889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.874 [2024-07-26 23:04:40.331903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.874 [2024-07-26 23:04:40.331916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.874 [2024-07-26 23:04:40.331947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.874 qpair failed and we were unable to recover it. 
00:34:47.874 [2024-07-26 23:04:40.341742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.874 [2024-07-26 23:04:40.341923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.874 [2024-07-26 23:04:40.341949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.874 [2024-07-26 23:04:40.341964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.874 [2024-07-26 23:04:40.341977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.874 [2024-07-26 23:04:40.342006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.874 qpair failed and we were unable to recover it. 00:34:47.874 [2024-07-26 23:04:40.351726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.874 [2024-07-26 23:04:40.351870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.874 [2024-07-26 23:04:40.351896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.874 [2024-07-26 23:04:40.351911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.874 [2024-07-26 23:04:40.351924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.875 [2024-07-26 23:04:40.351953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.875 qpair failed and we were unable to recover it. 00:34:47.875 [2024-07-26 23:04:40.361758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.875 [2024-07-26 23:04:40.361908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.875 [2024-07-26 23:04:40.361933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.875 [2024-07-26 23:04:40.361947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.875 [2024-07-26 23:04:40.361960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.875 [2024-07-26 23:04:40.361990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.875 qpair failed and we were unable to recover it. 
00:34:47.875 [2024-07-26 23:04:40.371869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.875 [2024-07-26 23:04:40.372054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.875 [2024-07-26 23:04:40.372086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.875 [2024-07-26 23:04:40.372100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.875 [2024-07-26 23:04:40.372113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd440000b90 00:34:47.875 [2024-07-26 23:04:40.372145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:47.875 qpair failed and we were unable to recover it. 00:34:48.134 [2024-07-26 23:04:40.381849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.134 [2024-07-26 23:04:40.382009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.134 [2024-07-26 23:04:40.382041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.134 [2024-07-26 23:04:40.382057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.134 [2024-07-26 23:04:40.382085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da570 00:34:48.134 [2024-07-26 23:04:40.382117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.134 qpair failed and we were unable to recover it. 00:34:48.134 [2024-07-26 23:04:40.391904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.134 [2024-07-26 23:04:40.392045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.134 [2024-07-26 23:04:40.392079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.134 [2024-07-26 23:04:40.392095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.134 [2024-07-26 23:04:40.392108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da570 00:34:48.134 [2024-07-26 23:04:40.392137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.134 qpair failed and we were unable to recover it. 
00:34:48.134 [2024-07-26 23:04:40.401866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.134 [2024-07-26 23:04:40.402007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.134 [2024-07-26 23:04:40.402040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.134 [2024-07-26 23:04:40.402066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.134 [2024-07-26 23:04:40.402082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd448000b90 00:34:48.134 [2024-07-26 23:04:40.402114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:48.134 qpair failed and we were unable to recover it. 00:34:48.134 [2024-07-26 23:04:40.411913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.134 [2024-07-26 23:04:40.412067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.134 [2024-07-26 23:04:40.412094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.134 [2024-07-26 23:04:40.412114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.134 [2024-07-26 23:04:40.412129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd448000b90 00:34:48.134 [2024-07-26 23:04:40.412159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:48.134 qpair failed and we were unable to recover it. 00:34:48.134 [2024-07-26 23:04:40.412280] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:48.134 A controller has encountered a failure and is being reset. 00:34:48.134 [2024-07-26 23:04:40.421939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.134 [2024-07-26 23:04:40.422085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.134 [2024-07-26 23:04:40.422117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.134 [2024-07-26 23:04:40.422143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.134 [2024-07-26 23:04:40.422168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd438000b90 00:34:48.134 [2024-07-26 23:04:40.422217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:48.134 qpair failed and we were unable to recover it. 
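The failure pattern repeated above is one sequence seen from both ends: the target rejects the I/O-queue CONNECT with "Unknown controller ID 0x1", the initiator observes sct 1 / sc 130 (0x82, which for the Fabrics CONNECT command indicates invalid connect parameters), and the qpair is torn down without recovery. A minimal sketch of reproducing a single such attempt by hand with stock nvme-cli follows; it is not part of the test scripts, and it assumes the target is still listening on 10.0.0.2:4420 and that the nvme-tcp kernel module is available:

    # Hypothetical manual repro with nvme-cli only; no SPDK initiator involved.
    modprobe nvme-tcp
    # Enumerate what the target exposes at the logged address/port.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # Issue the fabric CONNECT the log shows failing; on the failure path the
    # I/O-queue CONNECT completes with sct 1 / sc 0x82 and no controller appears.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # Tear down again if the controller did attach.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1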
00:34:48.134 [2024-07-26 23:04:40.432089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.134 [2024-07-26 23:04:40.432233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.134 [2024-07-26 23:04:40.432261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.134 [2024-07-26 23:04:40.432285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.134 [2024-07-26 23:04:40.432310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd438000b90 00:34:48.134 [2024-07-26 23:04:40.432369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:48.134 qpair failed and we were unable to recover it. 00:34:48.134 [2024-07-26 23:04:40.432481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e80f0 (9): Bad file descriptor 00:34:48.134 Controller properly reset. 00:34:48.134 Initializing NVMe Controllers 00:34:48.134 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:48.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:48.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:48.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:48.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:48.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:48.134 Initialization complete. Launching workers. 
00:34:48.134 Starting thread on core 1 00:34:48.134 Starting thread on core 2 00:34:48.134 Starting thread on core 3 00:34:48.134 Starting thread on core 0 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:48.134 00:34:48.134 real 0m10.932s 00:34:48.134 user 0m16.876s 00:34:48.134 sys 0m5.723s 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:48.134 ************************************ 00:34:48.134 END TEST nvmf_target_disconnect_tc2 00:34:48.134 ************************************ 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:48.134 rmmod nvme_tcp 00:34:48.134 rmmod nvme_fabrics 00:34:48.134 rmmod nvme_keyring 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3700148 ']' 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3700148 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3700148 ']' 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3700148 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:48.134 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3700148 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3700148' 00:34:48.394 killing process with pid 3700148 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3700148 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3700148 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:48.394 
23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:48.394 23:04:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:50.928 23:04:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:50.928 00:34:50.928 real 0m15.715s 00:34:50.928 user 0m43.662s 00:34:50.928 sys 0m7.667s 00:34:50.928 23:04:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:50.928 23:04:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:50.928 ************************************ 00:34:50.928 END TEST nvmf_target_disconnect 00:34:50.928 ************************************ 00:34:50.928 23:04:42 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:50.928 23:04:42 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.928 23:04:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:50.928 23:04:42 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:50.928 00:34:50.928 real 27m0.292s 00:34:50.928 user 73m49.541s 00:34:50.928 sys 6m28.593s 00:34:50.928 23:04:42 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:50.928 23:04:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:50.928 ************************************ 00:34:50.928 END TEST nvmf_tcp 00:34:50.928 ************************************ 00:34:50.928 23:04:42 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:50.928 23:04:42 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:50.928 23:04:42 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:50.928 23:04:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:50.928 23:04:42 -- common/autotest_common.sh@10 -- # set +x 00:34:50.928 ************************************ 00:34:50.928 START TEST spdkcli_nvmf_tcp 00:34:50.928 ************************************ 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:50.928 * Looking for test storage... 
00:34:50.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:50.928 23:04:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3701347 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3701347 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3701347 ']' 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:50.929 [2024-07-26 23:04:43.117718] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:50.929 [2024-07-26 23:04:43.117801] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701347 ] 00:34:50.929 EAL: No free 2048 kB hugepages reported on node 1 00:34:50.929 [2024-07-26 23:04:43.178107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:50.929 [2024-07-26 23:04:43.271085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.929 [2024-07-26 23:04:43.271097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:50.929 23:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:50.929 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:50.929 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:50.929 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:50.929 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:50.929 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:50.929 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:50.929 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:50.929 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:50.929 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:50.929 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:50.929 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:50.929 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:50.929 ' 00:34:53.464 [2024-07-26 23:04:45.964658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:54.839 [2024-07-26 23:04:47.184873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:57.379 [2024-07-26 23:04:49.452110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:59.284 [2024-07-26 23:04:51.426392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:00.661 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:00.661 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:00.661 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:00.661 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:00.661 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:00.661 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:00.661 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:00.661 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:00.661 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:00.661 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:00.661 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:00.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:00.661 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:00.661 23:04:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:00.661 23:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:00.661 23:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.661 23:04:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:00.661 23:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:00.661 23:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.661 23:04:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:00.661 23:04:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:01.229 23:04:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:01.229 23:04:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:01.230 23:04:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:01.230 23:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:01.230 23:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.230 23:04:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:01.230 23:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:01.230 23:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.230 23:04:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:01.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:01.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:01.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:01.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:01.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:01.230 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:01.230 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:01.230 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:01.230 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:01.230 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:01.230 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:01.230 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:01.230 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:01.230 ' 00:35:06.497 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:06.497 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:06.497 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:06.497 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:06.497 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:06.497 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:06.497 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:06.497 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:06.497 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:06.497 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:06.497 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:35:06.497 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:06.497 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:06.497 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3701347 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3701347 ']' 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3701347 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3701347 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3701347' 00:35:06.497 killing process with pid 3701347 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3701347 00:35:06.497 23:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3701347 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3701347 ']' 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3701347 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3701347 ']' 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3701347 00:35:06.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3701347) - No such process 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3701347 is not found' 00:35:06.756 Process with pid 3701347 is not found 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:06.756 00:35:06.756 real 0m16.089s 00:35:06.756 user 0m34.018s 00:35:06.756 sys 0m0.840s 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:06.756 23:04:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:06.756 ************************************ 00:35:06.756 END TEST spdkcli_nvmf_tcp 00:35:06.756 ************************************ 00:35:06.756 23:04:59 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:06.756 23:04:59 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:06.756 23:04:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:06.756 23:04:59 -- common/autotest_common.sh@10 -- # set +x 00:35:06.756 ************************************ 00:35:06.756 START TEST nvmf_identify_passthru 00:35:06.756 ************************************ 00:35:06.756 23:04:59 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:06.756 * Looking for test storage... 00:35:06.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:06.756 23:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.756 23:04:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.756 23:04:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.756 23:04:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:06.756 23:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.756 23:04:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.756 23:04:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.756 23:04:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:06.756 23:04:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.756 23:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:06.756 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:06.757 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.757 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:06.757 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:06.757 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:06.757 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.757 23:04:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:06.757 23:04:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.757 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:06.757 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:06.757 23:04:59 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:06.757 23:04:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:08.658 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:08.659 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:08.659 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:08.659 23:05:01 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:08.659 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:08.659 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:08.659 23:05:01 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:08.659 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:08.919 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:08.919 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:35:08.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:08.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms
00:35:08.919
00:35:08.919 --- 10.0.0.2 ping statistics ---
00:35:08.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:08.919 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms
00:35:08.919 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:08.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:08.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms
00:35:08.919
00:35:08.919 --- 10.0.0.1 ping statistics ---
00:35:08.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:08.919 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
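That block is the whole point-to-point fixture: the NIC's first port is moved into a private network namespace to act as the target side, the second port stays in the root namespace as the initiator, and the two pings prove reachability in both directions. A standalone sketch of the same topology, assuming the cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing used in this run:

# Namespace-based NVMe/TCP test topology -- a sketch, not the test script itself.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                   # initiator -> target sanity check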
00:35:08.919 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:08.920 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0
00:35:08.920 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:35:08.920 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:08.920 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:35:08.920 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:35:08.920 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:08.920 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:35:08.920 23:05:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:35:08.920 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:08.920 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=()
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs))
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr'
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 ))
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0
00:35:08.920 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0
00:35:08.920 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0
00:35:08.920 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']'
00:35:08.920 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0
00:35:08.920 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:35:08.920 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:35:08.920 EAL: No free 2048 kB hugepages reported on node 1
00:35:13.117 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN
00:35:13.117 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0
00:35:13.117 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:35:13.117 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:35:13.117 EAL: No free 2048 kB hugepages reported on node 1
00:35:17.306 23:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL
00:35:17.306 23:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:17.306 23:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
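At this point the test has its reference values: serial PHLJ916004901P0FGN and model INTEL, scraped from spdk_nvme_identify against the PCIe-attached controller. The same two fields will be fetched over NVMe/TCP later (identify_passthru.sh@63/@68) and must compare equal. Reduced to plain shell, the scrape-and-compare looks roughly like this (a sketch; assumes spdk_nvme_identify from build/bin is on PATH):

# Scrape identify fields locally (PCIe) and remotely (TCP), then compare -- sketch.
bdf=0000:88:00.0
pcie_sn=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
tcp_sn=$(spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
          | grep 'Serial Number:' | awk '{print $3}')
if [ "$pcie_sn" != "$tcp_sn" ]; then
  echo "passthru identify mismatch: $pcie_sn vs $tcp_sn" >&2
  exit 1
fi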
00:35:17.306 23:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3705838
00:35:17.306 23:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:35:17.306 23:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:35:17.306 23:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3705838
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3705838 ']'
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:17.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable
00:35:17.306 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:17.306 [2024-07-26 23:05:09.722923] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:35:17.306 [2024-07-26 23:05:09.723015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:17.306 EAL: No free 2048 kB hugepages reported on node 1
00:35:17.306 [2024-07-26 23:05:09.786794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:17.564 [2024-07-26 23:05:09.871901] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:17.564 [2024-07-26 23:05:09.871952] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:17.564 [2024-07-26 23:05:09.871972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:17.564 [2024-07-26 23:05:09.871984] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:17.564 [2024-07-26 23:05:09.871994] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:17.564 [2024-07-26 23:05:09.872066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:35:17.565 [2024-07-26 23:05:09.872123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:35:17.565 [2024-07-26 23:05:09.872150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:35:17.565 [2024-07-26 23:05:09.872152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:35:17.565 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:35:17.565 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0
00:35:17.565 23:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:35:17.565 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:17.565 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:17.565 INFO: Log level set to 20
00:35:17.565 INFO: Requests:
00:35:17.565 {
00:35:17.565 "jsonrpc": "2.0",
00:35:17.565 "method": "nvmf_set_config",
00:35:17.565 "id": 1,
00:35:17.565 "params": {
00:35:17.565 "admin_cmd_passthru": {
00:35:17.565 "identify_ctrlr": true
00:35:17.565 }
00:35:17.565 }
00:35:17.565 }
00:35:17.565
00:35:17.565 INFO: response:
00:35:17.565 {
00:35:17.565 "jsonrpc": "2.0",
00:35:17.565 "id": 1,
00:35:17.565 "result": true
00:35:17.565 }
00:35:17.565
00:35:17.565 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:17.565 23:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:35:17.565 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:17.565 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:17.565 INFO: Setting log level to 20
00:35:17.565 INFO: Setting log level to 20
00:35:17.565 INFO: Log level set to 20
00:35:17.565 INFO: Log level set to 20
00:35:17.565 INFO: Requests:
00:35:17.565 {
00:35:17.565 "jsonrpc": "2.0",
00:35:17.565 "method": "framework_start_init",
00:35:17.565 "id": 1
00:35:17.565 }
00:35:17.565
00:35:17.565 INFO: Requests:
00:35:17.565 {
00:35:17.565 "jsonrpc": "2.0",
00:35:17.565 "method": "framework_start_init",
00:35:17.565 "id": 1
00:35:17.565 }
00:35:17.565
00:35:17.565 [2024-07-26 23:05:10.042296] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:35:17.565 INFO: response:
00:35:17.565 {
00:35:17.565 "jsonrpc": "2.0",
00:35:17.565 "id": 1,
00:35:17.565 "result": true
00:35:17.565 }
00:35:17.565
00:35:17.565 INFO: response:
00:35:17.565 {
00:35:17.565 "jsonrpc": "2.0",
00:35:17.565 "id": 1,
00:35:17.565 "result": true
00:35:17.565 }
00:35:17.565
00:35:17.565 23:05:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:17.565 23:05:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:17.565 23:05:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:17.565 23:05:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:17.565 INFO: Setting log level to 40
00:35:17.565 INFO: Setting log level to 40
00:35:17.565 INFO: Setting log level to 40
00:35:17.565 [2024-07-26 23:05:10.052228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:17.565 23:05:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:17.565 23:05:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:35:17.565 23:05:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:17.565 23:05:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:17.823 23:05:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
00:35:17.823 23:05:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:17.823 23:05:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:21.148 Nvme0n1
00:35:21.148 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:21.149 23:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:35:21.149 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:21.149 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:21.149 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:21.149 23:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:35:21.149 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:21.149 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:21.149 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:21.149 23:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:21.149 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:21.149 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:21.149 [2024-07-26 23:05:12.943435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
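That listen notice completes the target-side bring-up. For reference, the whole configuration the test just drove maps onto a short sequence of RPCs that could equally be issued by hand with SPDK's scripts/rpc.py (rpc_cmd above is a thin wrapper around it); a sketch using this run's names and the default /var/tmp/spdk.sock socket:

# Equivalent manual RPC sequence -- sketch; order matters (set_config before framework init).
rpc.py nvmf_set_config --passthru-identify-ctrlr
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420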
"SPDK00000000000001", 00:35:21.149 "model_number": "SPDK bdev Controller", 00:35:21.149 "max_namespaces": 1, 00:35:21.149 "min_cntlid": 1, 00:35:21.149 "max_cntlid": 65519, 00:35:21.149 "namespaces": [ 00:35:21.149 { 00:35:21.149 "nsid": 1, 00:35:21.149 "bdev_name": "Nvme0n1", 00:35:21.149 "name": "Nvme0n1", 00:35:21.149 "nguid": "640AD93927714F80B5DE7411BF19D98A", 00:35:21.149 "uuid": "640ad939-2771-4f80-b5de-7411bf19d98a" 00:35:21.149 } 00:35:21.149 ] 00:35:21.149 } 00:35:21.149 ] 00:35:21.149 23:05:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.149 23:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:21.149 23:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:21.149 23:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:21.149 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:21.149 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:21.149 23:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:21.149 rmmod nvme_tcp 00:35:21.149 rmmod nvme_fabrics 00:35:21.149 rmmod nvme_keyring 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:21.149 23:05:13 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3705838 ']' 00:35:21.149 23:05:13 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3705838 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3705838 ']' 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3705838 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3705838 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3705838' 00:35:21.149 killing process with pid 3705838 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3705838 00:35:21.149 23:05:13 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3705838 00:35:22.525 23:05:14 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:22.525 23:05:14 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:22.525 23:05:14 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:22.525 23:05:14 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:22.525 23:05:14 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:22.525 23:05:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.525 23:05:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:22.525 23:05:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.063 23:05:16 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:25.063 00:35:25.063 real 0m17.822s 00:35:25.063 user 0m26.509s 00:35:25.063 sys 0m2.231s 00:35:25.063 23:05:16 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:25.063 23:05:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.063 ************************************ 00:35:25.063 END TEST nvmf_identify_passthru 00:35:25.063 ************************************ 00:35:25.063 23:05:16 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:25.063 23:05:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:25.063 23:05:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:25.063 23:05:16 -- common/autotest_common.sh@10 -- # set +x 00:35:25.063 ************************************ 00:35:25.063 START TEST nvmf_dif 00:35:25.063 ************************************ 00:35:25.063 23:05:17 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:25.063 * Looking for test storage... 
00:35:25.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:25.063 23:05:17 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.063 23:05:17 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.063 23:05:17 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.063 23:05:17 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.063 23:05:17 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.063 23:05:17 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.063 23:05:17 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.063 23:05:17 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:25.063 23:05:17 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:25.063 23:05:17 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:25.063 23:05:17 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:25.063 23:05:17 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:25.063 23:05:17 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:25.063 23:05:17 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.063 23:05:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:25.063 23:05:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:25.063 23:05:17 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:25.063 23:05:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:26.438 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.438 23:05:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:26.438 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
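The loop running here is SPDK's NIC triage: walk the PCI bus, bucket devices by vendor:device ID into the e810/x722/mlx arrays just declared (Intel E810 = 0x1592/0x159b in this run), then keep only the ports whose netdev is up. A rough standalone equivalent using lspci (a sketch; assumes pciutils with class filtering, IDs taken from the log):

# Bucket Ethernet controllers by PCI ID, as gather_supported_nvmf_pci_devs does -- sketch.
e810=() x722=() mlx=()
while read -r addr _ id _; do
  case "$id" in
    8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 variants
    8086:37d2)           x722+=("$addr") ;;  # Intel X722
    15b3:*)              mlx+=("$addr")  ;;  # Mellanox ConnectX family
  esac
done < <(lspci -Dn -d ::0200)                # class 0200 = Ethernet controller
printf 'E810 ports: %s\n' "${e810[@]}"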
00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:26.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:26.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.439 23:05:18 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.698 23:05:18 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:26.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:35:26.698 00:35:26.698 --- 10.0.0.2 ping statistics --- 00:35:26.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.699 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:35:26.699 23:05:18 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:26.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:35:26.699 00:35:26.699 --- 10.0.0.1 ping statistics --- 00:35:26.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.699 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:35:26.699 23:05:18 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.699 23:05:18 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:26.699 23:05:18 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:26.699 23:05:18 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:27.632 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:27.632 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:27.632 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:27.632 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:27.632 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:27.632 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:27.632 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:27.632 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:27.632 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:27.632 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:27.632 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:27.632 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:27.632 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:27.632 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:27.632 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:27.632 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:27.632 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:27.890 23:05:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:27.890 23:05:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:27.890 23:05:20 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:27.890 23:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3708978 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:27.890 23:05:20 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3708978 00:35:27.890 23:05:20 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3708978 ']' 00:35:27.890 23:05:20 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.890 23:05:20 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:27.890 23:05:20 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.890 23:05:20 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:27.890 23:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.890 [2024-07-26 23:05:20.210029] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:35:27.890 [2024-07-26 23:05:20.210106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.890 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.890 [2024-07-26 23:05:20.290504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.890 [2024-07-26 23:05:20.385110] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.890 [2024-07-26 23:05:20.385177] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.890 [2024-07-26 23:05:20.385212] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.890 [2024-07-26 23:05:20.385234] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.890 [2024-07-26 23:05:20.385252] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
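nvmfappstart has now launched the target inside the test namespace; note the EAL banner above reports a single core (-c 0x1) for this dif run, unlike the 4-core passthru run earlier. The essence of the launch-and-wait, reduced to plain shell (a simplified sketch; the real waitforlisten polls the RPC server rather than just the socket file):

# Start nvmf_tgt in the test namespace and wait for its RPC socket -- simplified sketch.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip   # the next step in the log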
00:35:27.890 [2024-07-26 23:05:20.385290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:28.148 23:05:20 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.148 23:05:20 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:28.148 23:05:20 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:28.148 23:05:20 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.148 [2024-07-26 23:05:20.572184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.148 23:05:20 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:28.148 23:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.148 ************************************ 00:35:28.148 START TEST fio_dif_1_default 00:35:28.148 ************************************ 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:28.148 bdev_null0 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:28.148 [2024-07-26 23:05:20.628425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:28.148 { 00:35:28.148 "params": { 00:35:28.148 "name": "Nvme$subsystem", 00:35:28.148 "trtype": "$TEST_TRANSPORT", 00:35:28.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:28.148 "adrfam": "ipv4", 00:35:28.148 "trsvcid": "$NVMF_PORT", 00:35:28.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:28.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:28.148 "hdgst": ${hdgst:-false}, 00:35:28.148 "ddgst": ${ddgst:-false} 00:35:28.148 }, 00:35:28.148 "method": "bdev_nvme_attach_controller" 00:35:28.148 } 00:35:28.148 EOF 00:35:28.148 )") 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:28.148 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan
00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}'
00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq .
00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=,
00:35:28.149 23:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:35:28.149 "params": {
00:35:28.149 "name": "Nvme0",
00:35:28.149 "trtype": "tcp",
00:35:28.149 "traddr": "10.0.0.2",
00:35:28.149 "adrfam": "ipv4",
00:35:28.149 "trsvcid": "4420",
00:35:28.149 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:28.149 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:28.149 "hdgst": false,
00:35:28.149 "ddgst": false
00:35:28.149 },
00:35:28.149 "method": "bdev_nvme_attach_controller"
00:35:28.149 }'
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib=
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}"
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}'
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib=
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:35:28.407 23:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:28.407 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:35:28.407 fio-3.35
00:35:28.407 Starting 1 thread
00:35:28.407 EAL: No free 2048 kB hugepages reported on node 1
00:35:40.612
00:35:40.612 filename0: (groupid=0, jobs=1): err= 0: pid=3709204: Fri Jul 26 23:05:31 2024
00:35:40.612 read: IOPS=189, BW=757KiB/s (775kB/s)(7568KiB/10001msec)
00:35:40.612 slat (nsec): min=4915, max=29915, avg=9286.08, stdev=2409.28
00:35:40.612 clat (usec): min=779, max=46550, avg=21113.96, stdev=20183.85
00:35:40.612 lat (usec): min=787, max=46564, avg=21123.25, stdev=20183.80
00:35:40.612 clat percentiles (usec):
00:35:40.612 | 1.00th=[ 840], 5.00th=[ 857], 10.00th=[ 865], 20.00th=[ 889],
00:35:40.612 | 30.00th=[ 906], 40.00th=[ 922], 50.00th=[41157], 60.00th=[41157],
00:35:40.612 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206],
00:35:40.612 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400],
00:35:40.612 | 99.99th=[46400]
00:35:40.612 bw ( KiB/s): min= 672, max= 768, per=99.90%, avg=756.21, stdev=28.64, samples=19
00:35:40.612 iops : min= 168, max= 192, avg=189.05, stdev= 7.16, samples=19
00:35:40.612 lat (usec) : 1000=49.68%
00:35:40.612 lat (msec) : 2=0.21%, 50=50.11%
00:35:40.612 cpu : usr=90.09%, sys=9.65%, ctx=14, majf=0, minf=227
00:35:40.612 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:40.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:40.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:40.612 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:40.612 latency : target=0, window=0, percentile=100.00%, depth=4
00:35:40.612
00:35:40.613 Run status group 0 (all jobs):
00:35:40.613 READ: bw=757KiB/s (775kB/s), 757KiB/s-757KiB/s (775kB/s-775kB/s), io=7568KiB (7750kB), run=10001-10001msec
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:40.613
00:35:40.613 real 0m11.081s
00:35:40.613 user 0m10.049s
00:35:40.613 sys 0m1.230s
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable
00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:40.613 ************************************
00:35:40.613 END TEST fio_dif_1_default
00:35:40.613 ************************************
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.613 bdev_null0 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.613 [2024-07-26 23:05:31.751731] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.613 bdev_null1 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.613 { 00:35:40.613 "params": { 00:35:40.613 "name": "Nvme$subsystem", 00:35:40.613 "trtype": "$TEST_TRANSPORT", 00:35:40.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.613 "adrfam": "ipv4", 00:35:40.613 "trsvcid": "$NVMF_PORT", 00:35:40.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.613 "hdgst": ${hdgst:-false}, 00:35:40.613 "ddgst": ${ddgst:-false} 00:35:40.613 }, 00:35:40.613 "method": "bdev_nvme_attach_controller" 00:35:40.613 } 00:35:40.613 EOF 00:35:40.613 )") 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.613 
23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.613 { 00:35:40.613 "params": { 00:35:40.613 "name": "Nvme$subsystem", 00:35:40.613 "trtype": "$TEST_TRANSPORT", 00:35:40.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.613 "adrfam": "ipv4", 00:35:40.613 "trsvcid": "$NVMF_PORT", 00:35:40.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.613 "hdgst": ${hdgst:-false}, 00:35:40.613 "ddgst": ${ddgst:-false} 00:35:40.613 }, 00:35:40.613 "method": "bdev_nvme_attach_controller" 00:35:40.613 } 00:35:40.613 EOF 00:35:40.613 )") 00:35:40.613 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
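For readers reconstructing the trace: gen_nvmf_target_json builds one bdev_nvme_attach_controller fragment per subsystem id and comma-joins them through jq, exactly as the heredoc/IFS/printf xtrace entries above show. A minimal standalone sketch of that pattern follows; helper and variable names mirror the trace, and the full wrapper layout in nvmf/common.sh is simplified here to a bare "subsystems" stub (an assumption, not the real helper's complete output).

# Sketch, not the real nvmf/common.sh: one attach-controller fragment per
# subsystem id, comma-joined into a bdev config array and validated by jq.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT, hdgst and ddgst
        # are exported elsewhere by the harness.
        config+=("$(
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments (IFS=, as in the trace) and let jq
    # validate/pretty-print the assembled config.
    local IFS=,
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

Called as gen_nvmf_target_json 0 1 this yields the two-controller config (Nvme0/Nvme1) printed by the nvmf/common.sh@558 entry below.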
00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:40.614 "params": { 00:35:40.614 "name": "Nvme0", 00:35:40.614 "trtype": "tcp", 00:35:40.614 "traddr": "10.0.0.2", 00:35:40.614 "adrfam": "ipv4", 00:35:40.614 "trsvcid": "4420", 00:35:40.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.614 "hdgst": false, 00:35:40.614 "ddgst": false 00:35:40.614 }, 00:35:40.614 "method": "bdev_nvme_attach_controller" 00:35:40.614 },{ 00:35:40.614 "params": { 00:35:40.614 "name": "Nvme1", 00:35:40.614 "trtype": "tcp", 00:35:40.614 "traddr": "10.0.0.2", 00:35:40.614 "adrfam": "ipv4", 00:35:40.614 "trsvcid": "4420", 00:35:40.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.614 "hdgst": false, 00:35:40.614 "ddgst": false 00:35:40.614 }, 00:35:40.614 "method": "bdev_nvme_attach_controller" 00:35:40.614 }' 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:40.614 23:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.614 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:40.614 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:40.614 fio-3.35 00:35:40.614 Starting 2 threads 00:35:40.614 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.601 00:35:50.601 filename0: (groupid=0, jobs=1): err= 0: pid=3710602: Fri Jul 26 23:05:42 2024 00:35:50.601 read: IOPS=141, BW=565KiB/s (579kB/s)(5664KiB/10016msec) 00:35:50.601 slat (nsec): min=5195, max=71407, avg=9706.12, stdev=2972.43 00:35:50.601 clat (usec): min=792, max=44972, avg=28261.07, stdev=18892.09 00:35:50.601 lat (usec): min=800, max=44985, avg=28270.77, stdev=18891.91 00:35:50.601 clat percentiles (usec): 00:35:50.601 | 1.00th=[ 816], 5.00th=[ 832], 10.00th=[ 848], 20.00th=[ 865], 00:35:50.601 | 30.00th=[ 922], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:50.601 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:50.601 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:50.601 | 99.99th=[44827] 
00:35:50.601 bw ( KiB/s): min= 384, max= 768, per=42.96%, avg=564.80, stdev=179.61, samples=20 00:35:50.601 iops : min= 96, max= 192, avg=141.20, stdev=44.90, samples=20 00:35:50.601 lat (usec) : 1000=32.06% 00:35:50.601 lat (msec) : 2=0.14%, 50=67.80% 00:35:50.601 cpu : usr=94.50%, sys=5.20%, ctx=15, majf=0, minf=134 00:35:50.601 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.601 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.601 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:50.601 filename1: (groupid=0, jobs=1): err= 0: pid=3710603: Fri Jul 26 23:05:42 2024 00:35:50.601 read: IOPS=186, BW=747KiB/s (765kB/s)(7488KiB/10018msec) 00:35:50.601 slat (nsec): min=5146, max=29487, avg=9545.28, stdev=2296.97 00:35:50.601 clat (usec): min=830, max=46018, avg=21374.65, stdev=20278.22 00:35:50.601 lat (usec): min=839, max=46047, avg=21384.19, stdev=20278.09 00:35:50.601 clat percentiles (usec): 00:35:50.601 | 1.00th=[ 857], 5.00th=[ 873], 10.00th=[ 889], 20.00th=[ 906], 00:35:50.601 | 30.00th=[ 922], 40.00th=[ 963], 50.00th=[41157], 60.00th=[41157], 00:35:50.601 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:50.601 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:35:50.601 | 99.99th=[45876] 00:35:50.601 bw ( KiB/s): min= 640, max= 768, per=56.90%, avg=747.20, stdev=39.23, samples=20 00:35:50.601 iops : min= 160, max= 192, avg=186.80, stdev= 9.81, samples=20 00:35:50.601 lat (usec) : 1000=45.94% 00:35:50.601 lat (msec) : 2=3.63%, 50=50.43% 00:35:50.601 cpu : usr=93.99%, sys=5.71%, ctx=14, majf=0, minf=128 00:35:50.601 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.601 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.601 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:50.601 00:35:50.601 Run status group 0 (all jobs): 00:35:50.601 READ: bw=1313KiB/s (1344kB/s), 565KiB/s-747KiB/s (579kB/s-765kB/s), io=12.8MiB (13.5MB), run=10016-10018msec 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.860 00:35:50.860 real 0m11.483s 00:35:50.860 user 0m20.459s 00:35:50.860 sys 0m1.401s 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 ************************************ 00:35:50.860 END TEST fio_dif_1_multi_subsystems 00:35:50.860 ************************************ 00:35:50.860 23:05:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:50.860 23:05:43 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:50.860 23:05:43 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 ************************************ 00:35:50.860 START TEST fio_dif_rand_params 00:35:50.860 ************************************ 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:50.860 
23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 bdev_null0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.860 [2024-07-26 23:05:43.283886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.860 23:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:50.860 { 00:35:50.860 "params": { 00:35:50.860 "name": "Nvme$subsystem", 00:35:50.860 "trtype": "$TEST_TRANSPORT", 00:35:50.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:50.860 "adrfam": "ipv4", 00:35:50.860 "trsvcid": "$NVMF_PORT", 00:35:50.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:50.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:50.860 "hdgst": ${hdgst:-false}, 00:35:50.860 "ddgst": ${ddgst:-false} 00:35:50.860 }, 00:35:50.860 "method": "bdev_nvme_attach_controller" 00:35:50.860 } 00:35:50.860 EOF 00:35:50.860 )") 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
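The ldd | grep libasan | awk '{print $3}' entries above are the harness probing the fio plugin for sanitizer runtimes before launch. A condensed sketch of that logic, assuming the workspace plugin path from the trace (here the probe finds nothing, so asan_lib stays empty):

# Probe the plugin for sanitizer runtimes; any hit must be preloaded ahead
# of the plugin itself, or the ASan runtime complains it is not loaded first.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=""
for sanitizer in libasan libclang_rt.asan; do
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $lib ]] && asan_lib+="$lib "
done
# Preload sanitizer runtimes (if any) plus the plugin, then launch fio with
# the SPDK bdev ioengine; fd 62/61 are supplied by the calling shell.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61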
00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:50.861 "params": { 00:35:50.861 "name": "Nvme0", 00:35:50.861 "trtype": "tcp", 00:35:50.861 "traddr": "10.0.0.2", 00:35:50.861 "adrfam": "ipv4", 00:35:50.861 "trsvcid": "4420", 00:35:50.861 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.861 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.861 "hdgst": false, 00:35:50.861 "ddgst": false 00:35:50.861 }, 00:35:50.861 "method": "bdev_nvme_attach_controller" 00:35:50.861 }' 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:50.861 23:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.118 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:51.118 ... 
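The /dev/fd/62 and /dev/fd/61 arguments in the fio command line above are anonymous descriptors created by the calling shell, so neither the JSON config nor the job file ever touches disk. With bash process substitution the same wiring can be written directly (a sketch reusing the helper names from the trace; $plugin is the spdk_bdev path probed above):

# fio resolves each /dev/fd path at startup: the first feeds the spdk_bdev
# ioengine its JSON config, the second is the generated fio job file.
LD_PRELOAD=" $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) <(gen_fio_conf)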
00:35:51.118 fio-3.35 00:35:51.118 Starting 3 threads 00:35:51.118 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.710 00:35:57.710 filename0: (groupid=0, jobs=1): err= 0: pid=3711999: Fri Jul 26 23:05:49 2024 00:35:57.710 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(131MiB/5006msec) 00:35:57.710 slat (nsec): min=4589, max=37940, avg=13258.91, stdev=2731.50 00:35:57.710 clat (usec): min=4938, max=93109, avg=14337.64, stdev=12752.81 00:35:57.710 lat (usec): min=4952, max=93123, avg=14350.90, stdev=12752.67 00:35:57.710 clat percentiles (usec): 00:35:57.710 | 1.00th=[ 5997], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 8717], 00:35:57.710 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11076], 00:35:57.710 | 70.00th=[11600], 80.00th=[12387], 90.00th=[16909], 95.00th=[51643], 00:35:57.710 | 99.00th=[54264], 99.50th=[54789], 99.90th=[56361], 99.95th=[92799], 00:35:57.710 | 99.99th=[92799] 00:35:57.710 bw ( KiB/s): min=20480, max=30976, per=34.93%, avg=26705.10, stdev=4079.35, samples=10 00:35:57.710 iops : min= 160, max= 242, avg=208.60, stdev=31.92, samples=10 00:35:57.710 lat (msec) : 10=41.97%, 20=48.09%, 50=1.53%, 100=8.41% 00:35:57.710 cpu : usr=90.97%, sys=6.93%, ctx=335, majf=0, minf=79 00:35:57.710 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.710 issued rwts: total=1046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.710 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:57.710 filename0: (groupid=0, jobs=1): err= 0: pid=3712000: Fri Jul 26 23:05:49 2024 00:35:57.710 read: IOPS=154, BW=19.3MiB/s (20.3MB/s)(97.2MiB/5034msec) 00:35:57.710 slat (nsec): min=4273, max=30371, avg=13145.59, stdev=2053.72 00:35:57.710 clat (msec): min=5, max=101, avg=19.39, stdev=16.47 00:35:57.710 lat (msec): min=5, max=101, avg=19.40, stdev=16.47 00:35:57.710 clat percentiles (msec): 00:35:57.710 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:35:57.710 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 16], 00:35:57.710 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 54], 95.00th=[ 58], 00:35:57.710 | 99.00th=[ 67], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 102], 00:35:57.710 | 99.99th=[ 102] 00:35:57.710 bw ( KiB/s): min=14592, max=24576, per=25.95%, avg=19840.00, stdev=3568.22, samples=10 00:35:57.710 iops : min= 114, max= 192, avg=155.00, stdev=27.88, samples=10 00:35:57.710 lat (msec) : 10=17.61%, 20=60.41%, 50=9.25%, 100=12.47%, 250=0.26% 00:35:57.710 cpu : usr=94.10%, sys=5.48%, ctx=10, majf=0, minf=57 00:35:57.710 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.710 issued rwts: total=778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.710 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:57.710 filename0: (groupid=0, jobs=1): err= 0: pid=3712001: Fri Jul 26 23:05:49 2024 00:35:57.710 read: IOPS=236, BW=29.5MiB/s (31.0MB/s)(148MiB/5005msec) 00:35:57.710 slat (nsec): min=4837, max=42124, avg=15167.54, stdev=3429.36 00:35:57.710 clat (usec): min=5119, max=97289, avg=12666.81, stdev=10514.45 00:35:57.710 lat (usec): min=5132, max=97307, avg=12681.98, stdev=10514.69 00:35:57.710 clat percentiles (usec): 00:35:57.710 | 1.00th=[ 5735], 5.00th=[ 6456], 
10.00th=[ 7046], 20.00th=[ 8094], 00:35:57.710 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[11207], 00:35:57.710 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13829], 95.00th=[50070], 00:35:57.710 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55837], 99.95th=[96994], 00:35:57.710 | 99.99th=[96994] 00:35:57.711 bw ( KiB/s): min=20736, max=39424, per=39.51%, avg=30212.10, stdev=7007.82, samples=10 00:35:57.711 iops : min= 162, max= 308, avg=236.00, stdev=54.80, samples=10 00:35:57.711 lat (msec) : 10=47.59%, 20=46.41%, 50=1.01%, 100=4.99% 00:35:57.711 cpu : usr=92.11%, sys=7.19%, ctx=10, majf=0, minf=128 00:35:57.711 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.711 issued rwts: total=1183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:57.711 00:35:57.711 Run status group 0 (all jobs): 00:35:57.711 READ: bw=74.7MiB/s (78.3MB/s), 19.3MiB/s-29.5MiB/s (20.3MB/s-31.0MB/s), io=376MiB (394MB), run=5005-5034msec 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 bdev_null0 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 [2024-07-26 23:05:49.280269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 bdev_null1 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 bdev_null2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:57.711 23:05:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:57.711 { 00:35:57.711 "params": { 00:35:57.711 "name": "Nvme$subsystem", 00:35:57.711 "trtype": "$TEST_TRANSPORT", 00:35:57.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.711 "adrfam": "ipv4", 00:35:57.711 "trsvcid": "$NVMF_PORT", 00:35:57.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.711 "hdgst": ${hdgst:-false}, 00:35:57.711 "ddgst": ${ddgst:-false} 00:35:57.711 }, 00:35:57.711 "method": "bdev_nvme_attach_controller" 00:35:57.711 } 00:35:57.711 EOF 00:35:57.711 )") 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:57.711 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:57.712 { 00:35:57.712 "params": { 00:35:57.712 "name": "Nvme$subsystem", 00:35:57.712 "trtype": "$TEST_TRANSPORT", 00:35:57.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.712 "adrfam": "ipv4", 00:35:57.712 "trsvcid": "$NVMF_PORT", 00:35:57.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.712 "hdgst": ${hdgst:-false}, 00:35:57.712 "ddgst": ${ddgst:-false} 00:35:57.712 }, 00:35:57.712 "method": "bdev_nvme_attach_controller" 00:35:57.712 } 00:35:57.712 EOF 00:35:57.712 )") 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:57.712 { 00:35:57.712 "params": { 00:35:57.712 "name": "Nvme$subsystem", 00:35:57.712 "trtype": "$TEST_TRANSPORT", 00:35:57.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.712 "adrfam": "ipv4", 00:35:57.712 "trsvcid": "$NVMF_PORT", 00:35:57.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.712 "hdgst": ${hdgst:-false}, 00:35:57.712 "ddgst": ${ddgst:-false} 00:35:57.712 }, 00:35:57.712 "method": "bdev_nvme_attach_controller" 00:35:57.712 } 00:35:57.712 EOF 00:35:57.712 )") 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:57.712 "params": { 00:35:57.712 "name": "Nvme0", 00:35:57.712 "trtype": "tcp", 00:35:57.712 "traddr": "10.0.0.2", 00:35:57.712 "adrfam": "ipv4", 00:35:57.712 "trsvcid": "4420", 00:35:57.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:57.712 "hdgst": false, 00:35:57.712 "ddgst": false 00:35:57.712 }, 00:35:57.712 "method": "bdev_nvme_attach_controller" 00:35:57.712 },{ 00:35:57.712 "params": { 00:35:57.712 "name": "Nvme1", 00:35:57.712 "trtype": "tcp", 00:35:57.712 "traddr": "10.0.0.2", 00:35:57.712 "adrfam": "ipv4", 00:35:57.712 "trsvcid": "4420", 00:35:57.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:57.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:57.712 "hdgst": false, 00:35:57.712 "ddgst": false 00:35:57.712 }, 00:35:57.712 "method": "bdev_nvme_attach_controller" 00:35:57.712 },{ 00:35:57.712 "params": { 00:35:57.712 "name": "Nvme2", 00:35:57.712 "trtype": "tcp", 00:35:57.712 "traddr": "10.0.0.2", 00:35:57.712 "adrfam": "ipv4", 00:35:57.712 "trsvcid": "4420", 00:35:57.712 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:57.712 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:57.712 "hdgst": false, 00:35:57.712 "ddgst": false 00:35:57.712 }, 00:35:57.712 "method": "bdev_nvme_attach_controller" 00:35:57.712 }' 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 
-- # asan_lib= 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:57.712 23:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.712 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:57.712 ... 00:35:57.712 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:57.712 ... 00:35:57.712 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:57.712 ... 00:35:57.712 fio-3.35 00:35:57.712 Starting 24 threads 00:35:57.712 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.914 00:36:09.914 filename0: (groupid=0, jobs=1): err= 0: pid=3712858: Fri Jul 26 23:06:00 2024 00:36:09.914 read: IOPS=49, BW=198KiB/s (203kB/s)(1984KiB/10027msec) 00:36:09.914 slat (usec): min=8, max=104, avg=53.60, stdev=31.09 00:36:09.914 clat (msec): min=173, max=506, avg=322.96, stdev=55.18 00:36:09.914 lat (msec): min=173, max=506, avg=323.02, stdev=55.17 00:36:09.914 clat percentiles (msec): 00:36:09.914 | 1.00th=[ 211], 5.00th=[ 218], 10.00th=[ 245], 20.00th=[ 275], 00:36:09.914 | 30.00th=[ 305], 40.00th=[ 317], 50.00th=[ 342], 60.00th=[ 347], 00:36:09.914 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 368], 95.00th=[ 384], 00:36:09.914 | 99.00th=[ 485], 99.50th=[ 489], 99.90th=[ 506], 99.95th=[ 506], 00:36:09.914 | 99.99th=[ 506] 00:36:09.914 bw ( KiB/s): min= 128, max= 256, per=3.32%, avg=192.00, stdev=62.72, samples=20 00:36:09.914 iops : min= 32, max= 64, avg=48.00, stdev=15.68, samples=20 00:36:09.914 lat (msec) : 250=11.29%, 500=88.31%, 750=0.40% 00:36:09.914 cpu : usr=97.56%, sys=1.65%, ctx=45, majf=0, minf=47 00:36:09.914 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:09.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.914 filename0: (groupid=0, jobs=1): err= 0: pid=3712859: Fri Jul 26 23:06:00 2024 00:36:09.914 read: IOPS=74, BW=297KiB/s (304kB/s)(3008KiB/10134msec) 00:36:09.914 slat (nsec): min=8355, max=48017, avg=12536.61, stdev=5486.52 00:36:09.914 clat (msec): min=134, max=306, avg=215.22, stdev=32.74 00:36:09.914 lat (msec): min=134, max=306, avg=215.24, stdev=32.74 00:36:09.914 clat percentiles (msec): 00:36:09.914 | 1.00th=[ 136], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 186], 00:36:09.914 | 30.00th=[ 199], 40.00th=[ 207], 50.00th=[ 213], 60.00th=[ 220], 00:36:09.914 | 70.00th=[ 230], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 268], 00:36:09.914 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 309], 99.95th=[ 309], 00:36:09.914 | 99.99th=[ 309] 00:36:09.914 bw ( KiB/s): min= 256, max= 384, per=5.12%, avg=294.40, stdev=55.04, samples=20 00:36:09.914 iops : min= 64, max= 96, avg=73.60, stdev=13.76, samples=20 00:36:09.914 lat (msec) : 250=79.79%, 500=20.21% 00:36:09.914 cpu : usr=98.27%, sys=1.33%, ctx=25, majf=0, minf=60 00:36:09.914 IO depths : 1=3.1%, 2=8.2%, 4=21.8%, 
8=57.4%, 16=9.4%, 32=0.0%, >=64=0.0% 00:36:09.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.914 filename0: (groupid=0, jobs=1): err= 0: pid=3712860: Fri Jul 26 23:06:00 2024 00:36:09.914 read: IOPS=50, BW=203KiB/s (207kB/s)(2048KiB/10111msec) 00:36:09.914 slat (nsec): min=6528, max=82239, avg=27185.76, stdev=15092.11 00:36:09.914 clat (msec): min=198, max=453, avg=315.69, stdev=46.89 00:36:09.914 lat (msec): min=198, max=453, avg=315.72, stdev=46.89 00:36:09.914 clat percentiles (msec): 00:36:09.914 | 1.00th=[ 203], 5.00th=[ 209], 10.00th=[ 245], 20.00th=[ 288], 00:36:09.914 | 30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 338], 00:36:09.914 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 372], 00:36:09.914 | 99.00th=[ 380], 99.50th=[ 397], 99.90th=[ 456], 99.95th=[ 456], 00:36:09.914 | 99.99th=[ 456] 00:36:09.914 bw ( KiB/s): min= 128, max= 368, per=3.45%, avg=198.40, stdev=74.94, samples=20 00:36:09.914 iops : min= 32, max= 92, avg=49.60, stdev=18.73, samples=20 00:36:09.914 lat (msec) : 250=10.55%, 500=89.45% 00:36:09.914 cpu : usr=98.25%, sys=1.35%, ctx=40, majf=0, minf=43 00:36:09.914 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:09.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.914 filename0: (groupid=0, jobs=1): err= 0: pid=3712861: Fri Jul 26 23:06:00 2024 00:36:09.914 read: IOPS=66, BW=268KiB/s (274kB/s)(2688KiB/10046msec) 00:36:09.914 slat (usec): min=8, max=108, avg=16.85, stdev=13.33 00:36:09.914 clat (msec): min=161, max=352, avg=239.03, stdev=40.28 00:36:09.914 lat (msec): min=161, max=352, avg=239.05, stdev=40.29 00:36:09.914 clat percentiles (msec): 00:36:09.914 | 1.00th=[ 161], 5.00th=[ 186], 10.00th=[ 199], 20.00th=[ 205], 00:36:09.914 | 30.00th=[ 218], 40.00th=[ 222], 50.00th=[ 232], 60.00th=[ 243], 00:36:09.914 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 296], 95.00th=[ 313], 00:36:09.914 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:36:09.914 | 99.99th=[ 351] 00:36:09.914 bw ( KiB/s): min= 240, max= 384, per=4.56%, avg=262.40, stdev=29.09, samples=20 00:36:09.914 iops : min= 60, max= 96, avg=65.60, stdev= 7.27, samples=20 00:36:09.914 lat (msec) : 250=61.61%, 500=38.39% 00:36:09.914 cpu : usr=98.38%, sys=1.23%, ctx=18, majf=0, minf=47 00:36:09.914 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:09.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.914 filename0: (groupid=0, jobs=1): err= 0: pid=3712862: Fri Jul 26 23:06:00 2024 00:36:09.914 read: IOPS=66, BW=267KiB/s (273kB/s)(2704KiB/10129msec) 00:36:09.914 slat (usec): min=8, max=177, avg=19.20, stdev=21.81 00:36:09.914 clat (msec): min=162, max=378, avg=238.52, stdev=38.78 00:36:09.914 lat (msec): min=162, max=378, 
avg=238.54, stdev=38.79 00:36:09.914 clat percentiles (msec): 00:36:09.914 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 197], 20.00th=[ 211], 00:36:09.914 | 30.00th=[ 218], 40.00th=[ 224], 50.00th=[ 241], 60.00th=[ 245], 00:36:09.914 | 70.00th=[ 251], 80.00th=[ 268], 90.00th=[ 288], 95.00th=[ 313], 00:36:09.914 | 99.00th=[ 363], 99.50th=[ 368], 99.90th=[ 380], 99.95th=[ 380], 00:36:09.914 | 99.99th=[ 380] 00:36:09.914 bw ( KiB/s): min= 192, max= 384, per=4.59%, avg=264.00, stdev=41.37, samples=20 00:36:09.914 iops : min= 48, max= 96, avg=66.00, stdev=10.34, samples=20 00:36:09.914 lat (msec) : 250=68.93%, 500=31.07% 00:36:09.914 cpu : usr=97.50%, sys=1.66%, ctx=40, majf=0, minf=44 00:36:09.914 IO depths : 1=2.1%, 2=4.7%, 4=13.9%, 8=68.6%, 16=10.7%, 32=0.0%, >=64=0.0% 00:36:09.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 complete : 0=0.0%, 4=90.8%, 8=3.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 issued rwts: total=676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.914 filename0: (groupid=0, jobs=1): err= 0: pid=3712863: Fri Jul 26 23:06:00 2024 00:36:09.914 read: IOPS=50, BW=202KiB/s (206kB/s)(2040KiB/10117msec) 00:36:09.914 slat (usec): min=8, max=111, avg=40.75, stdev=28.39 00:36:09.914 clat (msec): min=128, max=494, avg=316.68, stdev=64.45 00:36:09.914 lat (msec): min=128, max=494, avg=316.72, stdev=64.44 00:36:09.914 clat percentiles (msec): 00:36:09.914 | 1.00th=[ 184], 5.00th=[ 203], 10.00th=[ 203], 20.00th=[ 271], 00:36:09.914 | 30.00th=[ 279], 40.00th=[ 309], 50.00th=[ 334], 60.00th=[ 351], 00:36:09.914 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 376], 95.00th=[ 393], 00:36:09.914 | 99.00th=[ 485], 99.50th=[ 489], 99.90th=[ 493], 99.95th=[ 493], 00:36:09.914 | 99.99th=[ 493] 00:36:09.914 bw ( KiB/s): min= 128, max= 256, per=3.43%, avg=197.60, stdev=60.15, samples=20 00:36:09.914 iops : min= 32, max= 64, avg=49.40, stdev=15.04, samples=20 00:36:09.914 lat (msec) : 250=16.86%, 500=83.14% 00:36:09.914 cpu : usr=97.65%, sys=1.54%, ctx=41, majf=0, minf=29 00:36:09.914 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:09.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.914 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.914 filename0: (groupid=0, jobs=1): err= 0: pid=3712864: Fri Jul 26 23:06:00 2024 00:36:09.914 read: IOPS=80, BW=322KiB/s (329kB/s)(3264KiB/10147msec) 00:36:09.914 slat (usec): min=4, max=139, avg=15.01, stdev=14.62 00:36:09.914 clat (msec): min=6, max=277, avg=198.82, stdev=59.63 00:36:09.914 lat (msec): min=6, max=277, avg=198.83, stdev=59.62 00:36:09.914 clat percentiles (msec): 00:36:09.914 | 1.00th=[ 7], 5.00th=[ 48], 10.00th=[ 118], 20.00th=[ 180], 00:36:09.915 | 30.00th=[ 188], 40.00th=[ 201], 50.00th=[ 209], 60.00th=[ 218], 00:36:09.915 | 70.00th=[ 226], 80.00th=[ 236], 90.00th=[ 266], 95.00th=[ 271], 00:36:09.915 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:36:09.915 | 99.99th=[ 279] 00:36:09.915 bw ( KiB/s): min= 256, max= 768, per=5.57%, avg=320.00, stdev=121.08, samples=20 00:36:09.915 iops : min= 64, max= 192, avg=80.00, stdev=30.27, samples=20 00:36:09.915 lat (msec) : 10=1.96%, 20=1.96%, 50=1.72%, 100=2.45%, 250=76.23% 00:36:09.915 lat (msec) : 500=15.69% 00:36:09.915 cpu : 
usr=98.02%, sys=1.53%, ctx=34, majf=0, minf=48 00:36:09.915 IO depths : 1=5.8%, 2=11.6%, 4=24.1%, 8=51.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:09.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.915 filename0: (groupid=0, jobs=1): err= 0: pid=3712865: Fri Jul 26 23:06:00 2024 00:36:09.915 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10121msec) 00:36:09.915 slat (usec): min=9, max=106, avg=36.54, stdev=23.14 00:36:09.915 clat (msec): min=161, max=453, avg=315.83, stdev=52.85 00:36:09.915 lat (msec): min=161, max=453, avg=315.86, stdev=52.84 00:36:09.915 clat percentiles (msec): 00:36:09.915 | 1.00th=[ 199], 5.00th=[ 209], 10.00th=[ 222], 20.00th=[ 275], 00:36:09.915 | 30.00th=[ 296], 40.00th=[ 313], 50.00th=[ 330], 60.00th=[ 338], 00:36:09.915 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 380], 00:36:09.915 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 456], 00:36:09.915 | 99.99th=[ 456] 00:36:09.915 bw ( KiB/s): min= 128, max= 368, per=3.45%, avg=198.40, stdev=73.49, samples=20 00:36:09.915 iops : min= 32, max= 92, avg=49.60, stdev=18.37, samples=20 00:36:09.915 lat (msec) : 250=12.11%, 500=87.89% 00:36:09.915 cpu : usr=97.72%, sys=1.39%, ctx=59, majf=0, minf=32 00:36:09.915 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:09.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.915 filename1: (groupid=0, jobs=1): err= 0: pid=3712866: Fri Jul 26 23:06:00 2024 00:36:09.915 read: IOPS=70, BW=282KiB/s (289kB/s)(2864KiB/10144msec) 00:36:09.915 slat (usec): min=8, max=344, avg=32.72, stdev=35.76 00:36:09.915 clat (msec): min=14, max=339, avg=224.68, stdev=69.04 00:36:09.915 lat (msec): min=14, max=340, avg=224.71, stdev=69.05 00:36:09.915 clat percentiles (msec): 00:36:09.915 | 1.00th=[ 23], 5.00th=[ 45], 10.00th=[ 155], 20.00th=[ 205], 00:36:09.915 | 30.00th=[ 211], 40.00th=[ 218], 50.00th=[ 245], 60.00th=[ 251], 00:36:09.915 | 70.00th=[ 255], 80.00th=[ 268], 90.00th=[ 296], 95.00th=[ 313], 00:36:09.915 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 342], 99.95th=[ 342], 00:36:09.915 | 99.99th=[ 342] 00:36:09.915 bw ( KiB/s): min= 128, max= 688, per=4.87%, avg=280.00, stdev=114.62, samples=20 00:36:09.915 iops : min= 32, max= 172, avg=70.00, stdev=28.65, samples=20 00:36:09.915 lat (msec) : 20=0.98%, 50=5.17%, 100=3.63%, 250=50.28%, 500=39.94% 00:36:09.915 cpu : usr=97.40%, sys=1.74%, ctx=55, majf=0, minf=53 00:36:09.915 IO depths : 1=0.6%, 2=3.4%, 4=14.4%, 8=69.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:36:09.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 complete : 0=0.0%, 4=91.2%, 8=3.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 issued rwts: total=716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.915 filename1: (groupid=0, jobs=1): err= 0: pid=3712867: Fri Jul 26 23:06:00 2024 00:36:09.915 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10121msec) 00:36:09.915 slat (usec): min=10, max=114, avg=58.54, 
stdev=26.21 00:36:09.915 clat (msec): min=161, max=453, avg=315.64, stdev=51.84 00:36:09.915 lat (msec): min=161, max=453, avg=315.70, stdev=51.83 00:36:09.915 clat percentiles (msec): 00:36:09.915 | 1.00th=[ 199], 5.00th=[ 209], 10.00th=[ 239], 20.00th=[ 275], 00:36:09.915 | 30.00th=[ 296], 40.00th=[ 313], 50.00th=[ 330], 60.00th=[ 338], 00:36:09.915 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 380], 00:36:09.915 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 456], 00:36:09.915 | 99.99th=[ 456] 00:36:09.915 bw ( KiB/s): min= 128, max= 368, per=3.45%, avg=198.40, stdev=73.49, samples=20 00:36:09.915 iops : min= 32, max= 92, avg=49.60, stdev=18.37, samples=20 00:36:09.915 lat (msec) : 250=11.33%, 500=88.67% 00:36:09.915 cpu : usr=98.16%, sys=1.32%, ctx=29, majf=0, minf=49 00:36:09.915 IO depths : 1=3.7%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:36:09.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.915 filename1: (groupid=0, jobs=1): err= 0: pid=3712868: Fri Jul 26 23:06:00 2024 00:36:09.915 read: IOPS=50, BW=202KiB/s (207kB/s)(2040KiB/10116msec) 00:36:09.915 slat (usec): min=11, max=146, avg=61.65, stdev=27.51 00:36:09.915 clat (msec): min=144, max=494, avg=316.50, stdev=63.21 00:36:09.915 lat (msec): min=144, max=494, avg=316.56, stdev=63.20 00:36:09.915 clat percentiles (msec): 00:36:09.915 | 1.00th=[ 186], 5.00th=[ 203], 10.00th=[ 203], 20.00th=[ 271], 00:36:09.915 | 30.00th=[ 279], 40.00th=[ 309], 50.00th=[ 334], 60.00th=[ 347], 00:36:09.915 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 376], 95.00th=[ 380], 00:36:09.915 | 99.00th=[ 485], 99.50th=[ 489], 99.90th=[ 493], 99.95th=[ 493], 00:36:09.915 | 99.99th=[ 493] 00:36:09.915 bw ( KiB/s): min= 128, max= 256, per=3.43%, avg=197.60, stdev=60.15, samples=20 00:36:09.915 iops : min= 32, max= 64, avg=49.40, stdev=15.04, samples=20 00:36:09.915 lat (msec) : 250=16.86%, 500=83.14% 00:36:09.915 cpu : usr=96.79%, sys=1.94%, ctx=108, majf=0, minf=38 00:36:09.915 IO depths : 1=3.3%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:36:09.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.915 filename1: (groupid=0, jobs=1): err= 0: pid=3712869: Fri Jul 26 23:06:00 2024 00:36:09.915 read: IOPS=50, BW=203KiB/s (207kB/s)(2048KiB/10113msec) 00:36:09.915 slat (usec): min=4, max=200, avg=60.67, stdev=30.51 00:36:09.915 clat (msec): min=181, max=454, avg=315.47, stdev=46.80 00:36:09.915 lat (msec): min=181, max=454, avg=315.53, stdev=46.79 00:36:09.915 clat percentiles (msec): 00:36:09.915 | 1.00th=[ 203], 5.00th=[ 209], 10.00th=[ 259], 20.00th=[ 284], 00:36:09.915 | 30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 330], 60.00th=[ 338], 00:36:09.915 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 372], 00:36:09.915 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 456], 99.95th=[ 456], 00:36:09.915 | 99.99th=[ 456] 00:36:09.915 bw ( KiB/s): min= 128, max= 384, per=3.45%, avg=198.40, stdev=77.42, samples=20 00:36:09.915 iops : min= 32, max= 96, avg=49.60, stdev=19.35, samples=20 
00:36:09.915 lat (msec) : 250=9.77%, 500=90.23% 00:36:09.915 cpu : usr=96.70%, sys=1.96%, ctx=58, majf=0, minf=31 00:36:09.915 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:09.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.915 filename1: (groupid=0, jobs=1): err= 0: pid=3712870: Fri Jul 26 23:06:00 2024 00:36:09.915 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10129msec) 00:36:09.915 slat (nsec): min=8307, max=80431, avg=17677.80, stdev=15237.91 00:36:09.915 clat (msec): min=174, max=463, avg=251.25, stdev=51.29 00:36:09.915 lat (msec): min=174, max=464, avg=251.27, stdev=51.29 00:36:09.915 clat percentiles (msec): 00:36:09.915 | 1.00th=[ 176], 5.00th=[ 178], 10.00th=[ 199], 20.00th=[ 203], 00:36:09.915 | 30.00th=[ 218], 40.00th=[ 222], 50.00th=[ 245], 60.00th=[ 255], 00:36:09.915 | 70.00th=[ 275], 80.00th=[ 296], 90.00th=[ 326], 95.00th=[ 347], 00:36:09.915 | 99.00th=[ 401], 99.50th=[ 422], 99.90th=[ 464], 99.95th=[ 464], 00:36:09.915 | 99.99th=[ 464] 00:36:09.915 bw ( KiB/s): min= 128, max= 384, per=4.33%, avg=249.60, stdev=74.76, samples=20 00:36:09.915 iops : min= 32, max= 96, avg=62.40, stdev=18.69, samples=20 00:36:09.915 lat (msec) : 250=55.94%, 500=44.06% 00:36:09.915 cpu : usr=98.15%, sys=1.40%, ctx=39, majf=0, minf=58 00:36:09.915 IO depths : 1=3.8%, 2=9.4%, 4=23.1%, 8=55.0%, 16=8.8%, 32=0.0%, >=64=0.0% 00:36:09.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.915 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.915 filename1: (groupid=0, jobs=1): err= 0: pid=3712871: Fri Jul 26 23:06:00 2024 00:36:09.915 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10115msec) 00:36:09.915 slat (nsec): min=8361, max=58142, avg=17665.44, stdev=6614.70 00:36:09.915 clat (msec): min=119, max=480, avg=315.94, stdev=57.36 00:36:09.915 lat (msec): min=119, max=480, avg=315.96, stdev=57.36 00:36:09.915 clat percentiles (msec): 00:36:09.915 | 1.00th=[ 203], 5.00th=[ 203], 10.00th=[ 211], 20.00th=[ 275], 00:36:09.915 | 30.00th=[ 296], 40.00th=[ 309], 50.00th=[ 334], 60.00th=[ 351], 00:36:09.915 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 363], 95.00th=[ 380], 00:36:09.915 | 99.00th=[ 477], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481], 00:36:09.915 | 99.99th=[ 481] 00:36:09.915 bw ( KiB/s): min= 128, max= 256, per=3.45%, avg=198.40, stdev=63.87, samples=20 00:36:09.915 iops : min= 32, max= 64, avg=49.60, stdev=15.97, samples=20 00:36:09.915 lat (msec) : 250=14.84%, 500=85.16% 00:36:09.915 cpu : usr=97.76%, sys=1.56%, ctx=82, majf=0, minf=46 00:36:09.915 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:09.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.916 filename1: (groupid=0, jobs=1): err= 0: pid=3712872: Fri Jul 26 23:06:00 2024 00:36:09.916 read: IOPS=76, BW=305KiB/s (313kB/s)(3072KiB/10063msec) 
00:36:09.916 slat (usec): min=8, max=233, avg=16.09, stdev=17.15 00:36:09.916 clat (msec): min=6, max=358, avg=209.51, stdev=64.44 00:36:09.916 lat (msec): min=6, max=358, avg=209.53, stdev=64.44 00:36:09.916 clat percentiles (msec): 00:36:09.916 | 1.00th=[ 8], 5.00th=[ 49], 10.00th=[ 142], 20.00th=[ 186], 00:36:09.916 | 30.00th=[ 205], 40.00th=[ 211], 50.00th=[ 218], 60.00th=[ 226], 00:36:09.916 | 70.00th=[ 245], 80.00th=[ 257], 90.00th=[ 268], 95.00th=[ 279], 00:36:09.916 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 359], 99.95th=[ 359], 00:36:09.916 | 99.99th=[ 359] 00:36:09.916 bw ( KiB/s): min= 144, max= 768, per=5.22%, avg=300.80, stdev=123.35, samples=20 00:36:09.916 iops : min= 36, max= 192, avg=75.20, stdev=30.84, samples=20 00:36:09.916 lat (msec) : 10=3.91%, 20=0.26%, 50=2.08%, 100=2.08%, 250=67.71% 00:36:09.916 lat (msec) : 500=23.96% 00:36:09.916 cpu : usr=98.34%, sys=1.14%, ctx=25, majf=0, minf=50 00:36:09.916 IO depths : 1=0.9%, 2=7.0%, 4=24.6%, 8=55.9%, 16=11.6%, 32=0.0%, >=64=0.0% 00:36:09.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.916 filename1: (groupid=0, jobs=1): err= 0: pid=3712873: Fri Jul 26 23:06:00 2024 00:36:09.916 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10129msec) 00:36:09.916 slat (usec): min=8, max=125, avg=27.31, stdev=27.03 00:36:09.916 clat (msec): min=118, max=454, avg=251.18, stdev=45.53 00:36:09.916 lat (msec): min=118, max=454, avg=251.21, stdev=45.54 00:36:09.916 clat percentiles (msec): 00:36:09.916 | 1.00th=[ 188], 5.00th=[ 197], 10.00th=[ 201], 20.00th=[ 215], 00:36:09.916 | 30.00th=[ 220], 40.00th=[ 230], 50.00th=[ 245], 60.00th=[ 262], 00:36:09.916 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 313], 95.00th=[ 342], 00:36:09.916 | 99.00th=[ 380], 99.50th=[ 435], 99.90th=[ 456], 99.95th=[ 456], 00:36:09.916 | 99.99th=[ 456] 00:36:09.916 bw ( KiB/s): min= 128, max= 384, per=4.33%, avg=249.60, stdev=60.85, samples=20 00:36:09.916 iops : min= 32, max= 96, avg=62.40, stdev=15.21, samples=20 00:36:09.916 lat (msec) : 250=53.44%, 500=46.56% 00:36:09.916 cpu : usr=96.95%, sys=1.89%, ctx=110, majf=0, minf=40 00:36:09.916 IO depths : 1=2.2%, 2=8.3%, 4=24.5%, 8=54.7%, 16=10.3%, 32=0.0%, >=64=0.0% 00:36:09.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.916 filename2: (groupid=0, jobs=1): err= 0: pid=3712874: Fri Jul 26 23:06:00 2024 00:36:09.916 read: IOPS=50, BW=203KiB/s (207kB/s)(2048KiB/10113msec) 00:36:09.916 slat (nsec): min=3817, max=53755, avg=23377.76, stdev=9896.85 00:36:09.916 clat (msec): min=155, max=454, avg=315.84, stdev=53.41 00:36:09.916 lat (msec): min=155, max=454, avg=315.86, stdev=53.41 00:36:09.916 clat percentiles (msec): 00:36:09.916 | 1.00th=[ 190], 5.00th=[ 209], 10.00th=[ 222], 20.00th=[ 275], 00:36:09.916 | 30.00th=[ 296], 40.00th=[ 313], 50.00th=[ 330], 60.00th=[ 338], 00:36:09.916 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 380], 00:36:09.916 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 456], 00:36:09.916 | 99.99th=[ 456] 00:36:09.916 bw ( KiB/s): min= 128, max= 384, 
per=3.45%, avg=198.40, stdev=76.19, samples=20 00:36:09.916 iops : min= 32, max= 96, avg=49.60, stdev=19.05, samples=20 00:36:09.916 lat (msec) : 250=11.72%, 500=88.28% 00:36:09.916 cpu : usr=98.27%, sys=1.33%, ctx=23, majf=0, minf=42 00:36:09.916 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:36:09.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.916 filename2: (groupid=0, jobs=1): err= 0: pid=3712875: Fri Jul 26 23:06:00 2024 00:36:09.916 read: IOPS=51, BW=208KiB/s (213kB/s)(2104KiB/10136msec) 00:36:09.916 slat (usec): min=8, max=139, avg=31.35, stdev=28.13 00:36:09.916 clat (msec): min=192, max=382, avg=307.72, stdev=55.68 00:36:09.916 lat (msec): min=192, max=382, avg=307.75, stdev=55.67 00:36:09.916 clat percentiles (msec): 00:36:09.916 | 1.00th=[ 192], 5.00th=[ 201], 10.00th=[ 203], 20.00th=[ 271], 00:36:09.916 | 30.00th=[ 279], 40.00th=[ 300], 50.00th=[ 330], 60.00th=[ 347], 00:36:09.916 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 363], 00:36:09.916 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 384], 99.95th=[ 384], 00:36:09.916 | 99.99th=[ 384] 00:36:09.916 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=204.00, stdev=54.17, samples=20 00:36:09.916 iops : min= 32, max= 64, avg=51.00, stdev=13.54, samples=20 00:36:09.916 lat (msec) : 250=17.49%, 500=82.51% 00:36:09.916 cpu : usr=97.75%, sys=1.45%, ctx=21, majf=0, minf=40 00:36:09.916 IO depths : 1=0.6%, 2=6.8%, 4=25.1%, 8=55.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:36:09.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.916 filename2: (groupid=0, jobs=1): err= 0: pid=3712876: Fri Jul 26 23:06:00 2024 00:36:09.916 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10118msec) 00:36:09.916 slat (usec): min=7, max=217, avg=31.06, stdev=28.15 00:36:09.916 clat (msec): min=185, max=454, avg=315.86, stdev=53.65 00:36:09.916 lat (msec): min=185, max=454, avg=315.89, stdev=53.65 00:36:09.916 clat percentiles (msec): 00:36:09.916 | 1.00th=[ 203], 5.00th=[ 203], 10.00th=[ 203], 20.00th=[ 275], 00:36:09.916 | 30.00th=[ 296], 40.00th=[ 309], 50.00th=[ 338], 60.00th=[ 351], 00:36:09.916 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 363], 95.00th=[ 376], 00:36:09.916 | 99.00th=[ 380], 99.50th=[ 435], 99.90th=[ 456], 99.95th=[ 456], 00:36:09.916 | 99.99th=[ 456] 00:36:09.916 bw ( KiB/s): min= 128, max= 256, per=3.45%, avg=198.40, stdev=63.87, samples=20 00:36:09.916 iops : min= 32, max= 64, avg=49.60, stdev=15.97, samples=20 00:36:09.916 lat (msec) : 250=13.67%, 500=86.33% 00:36:09.916 cpu : usr=96.39%, sys=2.09%, ctx=45, majf=0, minf=35 00:36:09.916 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:09.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.916 filename2: (groupid=0, jobs=1): err= 0: pid=3712877: 
Fri Jul 26 23:06:00 2024 00:36:09.916 read: IOPS=75, BW=301KiB/s (308kB/s)(3056KiB/10147msec) 00:36:09.916 slat (usec): min=4, max=109, avg=29.06, stdev=27.33 00:36:09.916 clat (msec): min=3, max=396, avg=210.69, stdev=68.31 00:36:09.916 lat (msec): min=3, max=396, avg=210.72, stdev=68.32 00:36:09.916 clat percentiles (msec): 00:36:09.916 | 1.00th=[ 7], 5.00th=[ 48], 10.00th=[ 142], 20.00th=[ 184], 00:36:09.916 | 30.00th=[ 203], 40.00th=[ 211], 50.00th=[ 218], 60.00th=[ 234], 00:36:09.916 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 271], 95.00th=[ 305], 00:36:09.916 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 397], 99.95th=[ 397], 00:36:09.916 | 99.99th=[ 397] 00:36:09.916 bw ( KiB/s): min= 128, max= 768, per=5.20%, avg=299.20, stdev=124.92, samples=20 00:36:09.916 iops : min= 32, max= 192, avg=74.80, stdev=31.23, samples=20 00:36:09.916 lat (msec) : 4=0.26%, 10=1.83%, 20=2.09%, 50=2.09%, 100=2.09% 00:36:09.916 lat (msec) : 250=64.92%, 500=26.70% 00:36:09.916 cpu : usr=97.99%, sys=1.36%, ctx=39, majf=0, minf=37 00:36:09.916 IO depths : 1=1.3%, 2=4.6%, 4=16.0%, 8=66.9%, 16=11.3%, 32=0.0%, >=64=0.0% 00:36:09.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 complete : 0=0.0%, 4=91.5%, 8=2.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 issued rwts: total=764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.916 filename2: (groupid=0, jobs=1): err= 0: pid=3712878: Fri Jul 26 23:06:00 2024 00:36:09.916 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10136msec) 00:36:09.916 slat (usec): min=7, max=108, avg=57.80, stdev=28.47 00:36:09.916 clat (msec): min=119, max=496, avg=306.65, stdev=68.13 00:36:09.916 lat (msec): min=119, max=496, avg=306.71, stdev=68.13 00:36:09.916 clat percentiles (msec): 00:36:09.916 | 1.00th=[ 186], 5.00th=[ 199], 10.00th=[ 203], 20.00th=[ 236], 00:36:09.916 | 30.00th=[ 275], 40.00th=[ 296], 50.00th=[ 330], 60.00th=[ 347], 00:36:09.916 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 363], 95.00th=[ 380], 00:36:09.916 | 99.00th=[ 489], 99.50th=[ 489], 99.90th=[ 498], 99.95th=[ 498], 00:36:09.916 | 99.99th=[ 498] 00:36:09.916 bw ( KiB/s): min= 128, max= 256, per=3.55%, avg=204.75, stdev=59.73, samples=20 00:36:09.916 iops : min= 32, max= 64, avg=51.15, stdev=14.90, samples=20 00:36:09.916 lat (msec) : 250=22.73%, 500=77.27% 00:36:09.916 cpu : usr=98.01%, sys=1.44%, ctx=11, majf=0, minf=49 00:36:09.916 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:09.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.916 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.916 filename2: (groupid=0, jobs=1): err= 0: pid=3712879: Fri Jul 26 23:06:00 2024 00:36:09.916 read: IOPS=72, BW=288KiB/s (295kB/s)(2920KiB/10135msec) 00:36:09.916 slat (usec): min=8, max=160, avg=14.22, stdev=13.86 00:36:09.916 clat (msec): min=145, max=364, avg=221.98, stdev=39.23 00:36:09.916 lat (msec): min=145, max=364, avg=221.99, stdev=39.23 00:36:09.916 clat percentiles (msec): 00:36:09.917 | 1.00th=[ 148], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 192], 00:36:09.917 | 30.00th=[ 199], 40.00th=[ 209], 50.00th=[ 218], 60.00th=[ 226], 00:36:09.917 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 288], 00:36:09.917 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 363], 
99.95th=[ 363], 00:36:09.917 | 99.99th=[ 363] 00:36:09.917 bw ( KiB/s): min= 224, max= 368, per=4.96%, avg=285.60, stdev=42.58, samples=20 00:36:09.917 iops : min= 56, max= 92, avg=71.40, stdev=10.64, samples=20 00:36:09.917 lat (msec) : 250=79.45%, 500=20.55% 00:36:09.917 cpu : usr=97.07%, sys=1.87%, ctx=80, majf=0, minf=136 00:36:09.917 IO depths : 1=0.4%, 2=2.2%, 4=11.2%, 8=73.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:36:09.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.917 complete : 0=0.0%, 4=90.1%, 8=4.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.917 issued rwts: total=730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.917 filename2: (groupid=0, jobs=1): err= 0: pid=3712880: Fri Jul 26 23:06:00 2024 00:36:09.917 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10115msec) 00:36:09.917 slat (nsec): min=8799, max=79360, avg=20524.15, stdev=12096.54 00:36:09.917 clat (msec): min=200, max=381, avg=315.89, stdev=50.79 00:36:09.917 lat (msec): min=200, max=381, avg=315.91, stdev=50.78 00:36:09.917 clat percentiles (msec): 00:36:09.917 | 1.00th=[ 201], 5.00th=[ 203], 10.00th=[ 243], 20.00th=[ 275], 00:36:09.917 | 30.00th=[ 296], 40.00th=[ 309], 50.00th=[ 334], 60.00th=[ 351], 00:36:09.917 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 363], 95.00th=[ 372], 00:36:09.917 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:36:09.917 | 99.99th=[ 380] 00:36:09.917 bw ( KiB/s): min= 128, max= 256, per=3.45%, avg=198.40, stdev=65.33, samples=20 00:36:09.917 iops : min= 32, max= 64, avg=49.60, stdev=16.33, samples=20 00:36:09.917 lat (msec) : 250=12.50%, 500=87.50% 00:36:09.917 cpu : usr=97.18%, sys=1.79%, ctx=77, majf=0, minf=43 00:36:09.917 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:09.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.917 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.917 filename2: (groupid=0, jobs=1): err= 0: pid=3712881: Fri Jul 26 23:06:00 2024 00:36:09.917 read: IOPS=72, BW=291KiB/s (298kB/s)(2952KiB/10143msec) 00:36:09.917 slat (usec): min=4, max=729, avg=32.22, stdev=40.52 00:36:09.917 clat (msec): min=13, max=380, avg=218.10, stdev=60.16 00:36:09.917 lat (msec): min=13, max=380, avg=218.13, stdev=60.16 00:36:09.917 clat percentiles (msec): 00:36:09.917 | 1.00th=[ 27], 5.00th=[ 55], 10.00th=[ 159], 20.00th=[ 199], 00:36:09.917 | 30.00th=[ 213], 40.00th=[ 218], 50.00th=[ 224], 60.00th=[ 232], 00:36:09.917 | 70.00th=[ 249], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:36:09.917 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 380], 99.95th=[ 380], 00:36:09.917 | 99.99th=[ 380] 00:36:09.917 bw ( KiB/s): min= 192, max= 689, per=5.01%, avg=288.85, stdev=103.04, samples=20 00:36:09.917 iops : min= 48, max= 172, avg=72.20, stdev=25.71, samples=20 00:36:09.917 lat (msec) : 20=0.95%, 50=3.39%, 100=2.98%, 250=63.69%, 500=29.00% 00:36:09.917 cpu : usr=97.41%, sys=1.57%, ctx=108, majf=0, minf=41 00:36:09.917 IO depths : 1=1.4%, 2=5.8%, 4=19.0%, 8=62.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:36:09.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.917 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.917 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:36:09.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:09.917 00:36:09.917 Run status group 0 (all jobs): 00:36:09.917 READ: bw=5747KiB/s (5885kB/s), 198KiB/s-322KiB/s (203kB/s-329kB/s), io=56.9MiB (59.7MB), run=10027-10147msec 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:09.917 23:06:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 bdev_null0 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 [2024-07-26 23:06:00.979827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:09.917 
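The create_subsystem steps traced above boil down to four RPC calls per subsystem: create a null bdev with a 512+16 byte format and T10 DIF type 1, wrap it in an NVMe-oF subsystem, attach the bdev as a namespace, and expose a TCP listener. A minimal standalone sketch using SPDK's scripts/rpc.py (the test's rpc_cmd wrapper drives the same RPC methods); it assumes a running nvmf_tgt that already has a TCP transport, e.g. from nvmf_create_transport -t tcp:

    # 64 MiB null bdev, 512-byte blocks plus 16 bytes of metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # NVMe-oF subsystem backed by that bdev, reachable over NVMe/TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The same sequence repeats just below for subsystem 1 (bdev_null1, cnode1).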
23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 bdev_null1 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.917 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.918 23:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:09.918 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.918 23:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:09.918 { 00:36:09.918 "params": { 00:36:09.918 "name": "Nvme$subsystem", 00:36:09.918 "trtype": "$TEST_TRANSPORT", 00:36:09.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.918 "adrfam": "ipv4", 00:36:09.918 "trsvcid": "$NVMF_PORT", 00:36:09.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.918 "hdgst": ${hdgst:-false}, 00:36:09.918 "ddgst": ${ddgst:-false} 00:36:09.918 }, 00:36:09.918 "method": "bdev_nvme_attach_controller" 00:36:09.918 } 00:36:09.918 EOF 00:36:09.918 )") 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:09.918 { 00:36:09.918 "params": { 00:36:09.918 "name": "Nvme$subsystem", 00:36:09.918 "trtype": "$TEST_TRANSPORT", 00:36:09.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.918 "adrfam": "ipv4", 00:36:09.918 "trsvcid": "$NVMF_PORT", 00:36:09.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.918 "hdgst": ${hdgst:-false}, 00:36:09.918 "ddgst": ${ddgst:-false} 00:36:09.918 }, 00:36:09.918 "method": "bdev_nvme_attach_controller" 00:36:09.918 } 00:36:09.918 EOF 00:36:09.918 )") 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
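The ldd/grep/awk steps in the trace above are fio_plugin's sanitizer check: if the spdk_bdev fio plugin was linked against libasan or libclang_rt.asan, that runtime has to be preloaded ahead of the plugin so ASan initializes before fio dlopen()s it; in this run both greps come back empty, so asan_lib stays unset. A condensed sketch of that logic, with the workspace paths as they appear in this job:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    # Pick up the ASan runtime the plugin links against, if any
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the sanitizer runtime (empty here) before the plugin itself;
    # without it an ASan-built plugin can fail to load under fio
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61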
00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:09.918 "params": { 00:36:09.918 "name": "Nvme0", 00:36:09.918 "trtype": "tcp", 00:36:09.918 "traddr": "10.0.0.2", 00:36:09.918 "adrfam": "ipv4", 00:36:09.918 "trsvcid": "4420", 00:36:09.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:09.918 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:09.918 "hdgst": false, 00:36:09.918 "ddgst": false 00:36:09.918 }, 00:36:09.918 "method": "bdev_nvme_attach_controller" 00:36:09.918 },{ 00:36:09.918 "params": { 00:36:09.918 "name": "Nvme1", 00:36:09.918 "trtype": "tcp", 00:36:09.918 "traddr": "10.0.0.2", 00:36:09.918 "adrfam": "ipv4", 00:36:09.918 "trsvcid": "4420", 00:36:09.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:09.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:09.918 "hdgst": false, 00:36:09.918 "ddgst": false 00:36:09.918 }, 00:36:09.918 "method": "bdev_nvme_attach_controller" 00:36:09.918 }' 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:09.918 23:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.918 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:09.918 ... 00:36:09.918 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:09.918 ... 
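The printf output above is the JSON handed to fio on /dev/fd/62: one bdev_nvme_attach_controller entry per target subsystem, each naming a controller (Nvme0, Nvme1) that connects back to the listeners created earlier. As a standalone --spdk_json_conf file, these method objects sit under the bdev subsystem of SPDK's JSON config format; a minimal single-controller sketch built from the values printed above:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

Each attached controller surfaces its namespace as a bdev (Nvme0n1, Nvme1n1), which is what the generated fio jobs can then address as their filename.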
00:36:09.918 fio-3.35 00:36:09.918 Starting 4 threads 00:36:09.918 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.177 00:36:15.177 filename0: (groupid=0, jobs=1): err= 0: pid=3714365: Fri Jul 26 23:06:07 2024 00:36:15.177 read: IOPS=1899, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5002msec) 00:36:15.177 slat (nsec): min=4153, max=55335, avg=13069.60, stdev=6460.90 00:36:15.177 clat (usec): min=1340, max=7518, avg=4170.14, stdev=719.96 00:36:15.177 lat (usec): min=1348, max=7532, avg=4183.21, stdev=719.21 00:36:15.177 clat percentiles (usec): 00:36:15.177 | 1.00th=[ 2999], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3720], 00:36:15.177 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4080], 00:36:15.177 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 5538], 95.00th=[ 5866], 00:36:15.177 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7046], 99.95th=[ 7242], 00:36:15.177 | 99.99th=[ 7504] 00:36:15.177 bw ( KiB/s): min=14768, max=15727, per=24.63%, avg=15193.50, stdev=282.63, samples=10 00:36:15.177 iops : min= 1846, max= 1965, avg=1899.10, stdev=35.15, samples=10 00:36:15.177 lat (msec) : 2=0.09%, 4=50.36%, 10=49.55% 00:36:15.177 cpu : usr=95.40%, sys=3.56%, ctx=9, majf=0, minf=9 00:36:15.177 IO depths : 1=0.1%, 2=3.3%, 4=69.0%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:15.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.177 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.177 issued rwts: total=9502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:15.177 filename0: (groupid=0, jobs=1): err= 0: pid=3714366: Fri Jul 26 23:06:07 2024 00:36:15.177 read: IOPS=1942, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5003msec) 00:36:15.177 slat (nsec): min=4022, max=58632, avg=12903.35, stdev=6096.77 00:36:15.177 clat (usec): min=903, max=7304, avg=4079.53, stdev=658.67 00:36:15.177 lat (usec): min=922, max=7340, avg=4092.44, stdev=658.82 00:36:15.177 clat percentiles (usec): 00:36:15.177 | 1.00th=[ 2900], 5.00th=[ 3294], 10.00th=[ 3490], 20.00th=[ 3654], 00:36:15.177 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3982], 60.00th=[ 4080], 00:36:15.177 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 5014], 95.00th=[ 5604], 00:36:15.177 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7177], 00:36:15.177 | 99.99th=[ 7308] 00:36:15.177 bw ( KiB/s): min=15024, max=15968, per=25.20%, avg=15539.10, stdev=382.39, samples=10 00:36:15.177 iops : min= 1878, max= 1996, avg=1942.30, stdev=47.70, samples=10 00:36:15.177 lat (usec) : 1000=0.01% 00:36:15.177 lat (msec) : 2=0.03%, 4=51.79%, 10=48.17% 00:36:15.177 cpu : usr=96.02%, sys=3.52%, ctx=9, majf=0, minf=0 00:36:15.177 IO depths : 1=0.1%, 2=2.8%, 4=69.2%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:15.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.177 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.177 issued rwts: total=9718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:15.177 filename1: (groupid=0, jobs=1): err= 0: pid=3714367: Fri Jul 26 23:06:07 2024 00:36:15.177 read: IOPS=1949, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5004msec) 00:36:15.178 slat (nsec): min=3856, max=57970, avg=13442.69, stdev=6326.53 00:36:15.178 clat (usec): min=1445, max=7262, avg=4063.02, stdev=613.70 00:36:15.178 lat (usec): min=1453, max=7285, avg=4076.46, stdev=613.62 00:36:15.178 clat percentiles (usec): 00:36:15.178 
| 1.00th=[ 2933], 5.00th=[ 3359], 10.00th=[ 3523], 20.00th=[ 3687], 00:36:15.178 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4047], 00:36:15.178 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4752], 95.00th=[ 5604], 00:36:15.178 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 6915], 99.95th=[ 7177], 00:36:15.178 | 99.99th=[ 7242] 00:36:15.178 bw ( KiB/s): min=15184, max=16144, per=25.29%, avg=15598.40, stdev=298.23, samples=10 00:36:15.178 iops : min= 1898, max= 2018, avg=1949.80, stdev=37.28, samples=10 00:36:15.178 lat (msec) : 2=0.11%, 4=53.26%, 10=46.62% 00:36:15.178 cpu : usr=95.42%, sys=4.00%, ctx=46, majf=0, minf=9 00:36:15.178 IO depths : 1=0.2%, 2=3.4%, 4=66.1%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:15.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.178 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.178 issued rwts: total=9757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.178 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:15.178 filename1: (groupid=0, jobs=1): err= 0: pid=3714368: Fri Jul 26 23:06:07 2024 00:36:15.178 read: IOPS=1919, BW=15.0MiB/s (15.7MB/s)(75.0MiB/5002msec) 00:36:15.178 slat (usec): min=3, max=100, avg=14.95, stdev= 7.64 00:36:15.178 clat (usec): min=949, max=45817, avg=4124.38, stdev=1380.66 00:36:15.178 lat (usec): min=968, max=45829, avg=4139.33, stdev=1380.14 00:36:15.178 clat percentiles (usec): 00:36:15.178 | 1.00th=[ 2671], 5.00th=[ 3228], 10.00th=[ 3458], 20.00th=[ 3687], 00:36:15.178 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4080], 00:36:15.178 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 5604], 00:36:15.178 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 8225], 99.95th=[45876], 00:36:15.178 | 99.99th=[45876] 00:36:15.178 bw ( KiB/s): min=14368, max=15792, per=24.88%, avg=15347.20, stdev=480.52, samples=10 00:36:15.178 iops : min= 1796, max= 1974, avg=1918.40, stdev=60.07, samples=10 00:36:15.178 lat (usec) : 1000=0.01% 00:36:15.178 lat (msec) : 2=0.14%, 4=51.24%, 10=48.53%, 50=0.08% 00:36:15.178 cpu : usr=93.96%, sys=5.12%, ctx=28, majf=0, minf=9 00:36:15.178 IO depths : 1=0.1%, 2=2.6%, 4=68.2%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:15.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.178 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.178 issued rwts: total=9600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.178 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:15.178 00:36:15.178 Run status group 0 (all jobs): 00:36:15.178 READ: bw=60.2MiB/s (63.2MB/s), 14.8MiB/s-15.2MiB/s (15.6MB/s-16.0MB/s), io=301MiB (316MB), run=5002-5004msec 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
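destroy_subsystem 0, which starts in the trace above and finishes just below, is the mirror image of the setup: the NVMe-oF subsystem is deleted first, then its backing null bdev. The rpc.py equivalent of that pair of rpc_cmd calls:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0

The loop then repeats for subsystem 1 (cnode1, bdev_null1).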
00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.178 00:36:15.178 real 0m23.996s 00:36:15.178 user 4m34.232s 00:36:15.178 sys 0m6.544s 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.178 ************************************ 00:36:15.178 END TEST fio_dif_rand_params 00:36:15.178 ************************************ 00:36:15.178 23:06:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:15.178 23:06:07 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:15.178 23:06:07 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:15.178 ************************************ 00:36:15.178 START TEST fio_dif_digest 00:36:15.178 ************************************ 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:15.178 23:06:07 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:15.178 bdev_null0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:15.178 [2024-07-26 23:06:07.324545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:15.178 { 00:36:15.178 "params": { 00:36:15.178 "name": "Nvme$subsystem", 00:36:15.178 "trtype": "$TEST_TRANSPORT", 00:36:15.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.178 "adrfam": "ipv4", 00:36:15.178 "trsvcid": "$NVMF_PORT", 00:36:15.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.178 "hdgst": ${hdgst:-false}, 00:36:15.178 "ddgst": 
${ddgst:-false} 00:36:15.178 }, 00:36:15.178 "method": "bdev_nvme_attach_controller" 00:36:15.178 } 00:36:15.178 EOF 00:36:15.178 )") 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.178 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
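Before the assembled JSON is printed below, note that the create_subsystems stage traced at the top of this test reduces to four rpc.py calls. A minimal sketch for replaying it by hand, assuming a running SPDK target that already has a TCP transport (nvmf_create_transport -t tcp); the names and sizes are copied from the trace:

    #!/usr/bin/env bash
    # Build the fio_dif_digest target: a 64 MiB null bdev with 512-byte
    # blocks, 16 bytes of metadata and DIF type 3, exported over NVMe/TCP.
    RPC=scripts/rpc.py   # path assumed relative to the SPDK checkout
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420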
00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:15.179 "params": { 00:36:15.179 "name": "Nvme0", 00:36:15.179 "trtype": "tcp", 00:36:15.179 "traddr": "10.0.0.2", 00:36:15.179 "adrfam": "ipv4", 00:36:15.179 "trsvcid": "4420", 00:36:15.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.179 "hdgst": true, 00:36:15.179 "ddgst": true 00:36:15.179 }, 00:36:15.179 "method": "bdev_nvme_attach_controller" 00:36:15.179 }' 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:15.179 23:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.179 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:15.179 ... 
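The two descriptors handed to fio above are generated on the fly: /dev/fd/61 carries the job file from gen_fio_conf and /dev/fd/62 the SPDK JSON whose params block the printf just emitted. A standalone sketch of the same invocation; the "subsystems"/"config" wrapper is an assumption about the full document the jq step assembles, while the params values are copied verbatim from the trace:

    cat > /tmp/nvme0.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }
    JSON
    # fio_bdev preloads SPDK's fio plugin so the spdk_bdev ioengine resolves;
    # $SPDK_DIR and jobfile.fio are placeholders for this sketch.
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
        fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json jobfile.fio

With hdgst and ddgst both true, every TCP capsule carries header and data digests, which is exactly what this digest test exercises.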
00:36:15.179 fio-3.35 00:36:15.179 Starting 3 threads 00:36:15.179 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.383 00:36:27.383 filename0: (groupid=0, jobs=1): err= 0: pid=3715362: Fri Jul 26 23:06:18 2024 00:36:27.384 read: IOPS=159, BW=19.9MiB/s (20.8MB/s)(200MiB/10048msec) 00:36:27.384 slat (nsec): min=4558, max=66985, avg=14907.92, stdev=2843.05 00:36:27.384 clat (usec): min=10245, max=61435, avg=18843.61, stdev=8100.59 00:36:27.384 lat (usec): min=10271, max=61449, avg=18858.52, stdev=8100.57 00:36:27.384 clat percentiles (usec): 00:36:27.384 | 1.00th=[11600], 5.00th=[14877], 10.00th=[15664], 20.00th=[16188], 00:36:27.384 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17433], 60.00th=[17695], 00:36:27.384 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19268], 95.00th=[20579], 00:36:27.384 | 99.00th=[59507], 99.50th=[60556], 99.90th=[61080], 99.95th=[61604], 00:36:27.384 | 99.99th=[61604] 00:36:27.384 bw ( KiB/s): min=18432, max=22272, per=28.38%, avg=20418.15, stdev=1253.04, samples=20 00:36:27.384 iops : min= 144, max= 174, avg=159.50, stdev= 9.77, samples=20 00:36:27.384 lat (msec) : 20=93.74%, 50=2.32%, 100=3.94% 00:36:27.384 cpu : usr=90.48%, sys=7.63%, ctx=324, majf=0, minf=152 00:36:27.384 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 issued rwts: total=1598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.384 filename0: (groupid=0, jobs=1): err= 0: pid=3715363: Fri Jul 26 23:06:18 2024 00:36:27.384 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(246MiB/10006msec) 00:36:27.384 slat (nsec): min=4370, max=41885, avg=14341.44, stdev=2190.41 00:36:27.384 clat (usec): min=8113, max=60256, avg=15241.98, stdev=2905.06 00:36:27.384 lat (usec): min=8143, max=60270, avg=15256.32, stdev=2905.04 00:36:27.384 clat percentiles (usec): 00:36:27.384 | 1.00th=[10683], 5.00th=[11600], 10.00th=[12780], 20.00th=[14091], 00:36:27.384 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15270], 60.00th=[15664], 00:36:27.384 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:36:27.384 | 99.00th=[18482], 99.50th=[19792], 99.90th=[60031], 99.95th=[60031], 00:36:27.384 | 99.99th=[60031] 00:36:27.384 bw ( KiB/s): min=23552, max=26880, per=34.96%, avg=25152.00, stdev=866.65, samples=20 00:36:27.384 iops : min= 184, max= 210, avg=196.50, stdev= 6.77, samples=20 00:36:27.384 lat (msec) : 10=0.10%, 20=99.44%, 50=0.15%, 100=0.31% 00:36:27.384 cpu : usr=92.33%, sys=6.89%, ctx=122, majf=0, minf=149 00:36:27.384 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.384 filename0: (groupid=0, jobs=1): err= 0: pid=3715364: Fri Jul 26 23:06:18 2024 00:36:27.384 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(260MiB/10045msec) 00:36:27.384 slat (nsec): min=4350, max=43309, avg=19276.68, stdev=3096.41 00:36:27.384 clat (usec): min=9268, max=56330, avg=14429.18, stdev=2811.57 00:36:27.384 lat (usec): min=9287, max=56346, avg=14448.46, stdev=2811.56 00:36:27.384 clat percentiles (usec): 00:36:27.384 
| 1.00th=[10159], 5.00th=[11207], 10.00th=[11994], 20.00th=[13435], 00:36:27.384 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:36:27.384 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15795], 95.00th=[16188], 00:36:27.384 | 99.00th=[16909], 99.50th=[21365], 99.90th=[54264], 99.95th=[54789], 00:36:27.384 | 99.99th=[56361] 00:36:27.384 bw ( KiB/s): min=25088, max=28160, per=37.01%, avg=26624.00, stdev=787.95, samples=20 00:36:27.384 iops : min= 196, max= 220, avg=208.00, stdev= 6.16, samples=20 00:36:27.384 lat (msec) : 10=0.82%, 20=98.66%, 50=0.14%, 100=0.38% 00:36:27.384 cpu : usr=90.61%, sys=8.72%, ctx=22, majf=0, minf=95 00:36:27.384 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 issued rwts: total=2082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.384 00:36:27.384 Run status group 0 (all jobs): 00:36:27.384 READ: bw=70.2MiB/s (73.7MB/s), 19.9MiB/s-25.9MiB/s (20.8MB/s-27.2MB/s), io=706MiB (740MB), run=10006-10048msec 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.384 00:36:27.384 real 0m11.087s 00:36:27.384 user 0m28.500s 00:36:27.384 sys 0m2.592s 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:27.384 23:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:27.384 ************************************ 00:36:27.384 END TEST fio_dif_digest 00:36:27.384 ************************************ 00:36:27.384 23:06:18 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:27.384 23:06:18 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:27.384 rmmod nvme_tcp 00:36:27.384 rmmod nvme_fabrics 
00:36:27.384 rmmod nvme_keyring 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3708978 ']' 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3708978 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3708978 ']' 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3708978 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3708978 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3708978' 00:36:27.384 killing process with pid 3708978 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3708978 00:36:27.384 23:06:18 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3708978 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:27.384 23:06:18 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:27.384 Waiting for block devices as requested 00:36:27.384 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:27.384 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:27.644 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:27.644 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:27.644 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:27.904 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:27.904 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:27.904 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:27.904 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:28.163 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:28.163 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:28.163 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:28.163 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:28.423 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:28.423 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:28.423 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:28.423 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:28.682 23:06:21 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:28.682 23:06:21 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:28.682 23:06:21 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:28.682 23:06:21 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:28.682 23:06:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.682 23:06:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:28.682 23:06:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.590 23:06:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:30.590 00:36:30.590 real 1m6.032s 00:36:30.590 user 6m29.992s 00:36:30.590 sys 0m18.070s 00:36:30.590 23:06:23 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:30.590 23:06:23 nvmf_dif -- common/autotest_common.sh@10 -- 
# set +x 00:36:30.590 ************************************ 00:36:30.590 END TEST nvmf_dif 00:36:30.590 ************************************ 00:36:30.590 23:06:23 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:30.590 23:06:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:30.590 23:06:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:30.590 23:06:23 -- common/autotest_common.sh@10 -- # set +x 00:36:30.861 ************************************ 00:36:30.861 START TEST nvmf_abort_qd_sizes 00:36:30.861 ************************************ 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:30.861 * Looking for test storage... 00:36:30.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:30.861 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.862 23:06:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:30.862 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:32.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:32.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:32.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:32.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
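The NIC scan above is plain sysfs globbing: for each supported PCI function the harness lists the netdevs bound under it. A hedged sketch of that probe for the two E810 functions just found (standard sysfs layout assumed; the driver checks and the rdma branch are omitted):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # a bound network function publishes its netdev name(s) here
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done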
00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:32.767 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:32.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:32.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:36:32.768 00:36:32.768 --- 10.0.0.2 ping statistics --- 00:36:32.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.768 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:32.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:32.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:36:32.768 00:36:32.768 --- 10.0.0.1 ping statistics --- 00:36:32.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.768 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:32.768 23:06:25 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:34.141 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:34.141 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:34.141 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:34.141 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:34.141 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:34.141 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:34.141 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:34.141 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:34.141 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:34.141 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:34.141 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:34.141 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:34.141 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:34.141 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:34.141 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:34.141 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:35.080 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3720413 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3720413 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3720413 ']' 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:35.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:35.080 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:35.080 [2024-07-26 23:06:27.546155] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:36:35.080 [2024-07-26 23:06:27.546240] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:35.080 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.340 [2024-07-26 23:06:27.620544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:35.340 [2024-07-26 23:06:27.718794] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:35.340 [2024-07-26 23:06:27.718855] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:35.340 [2024-07-26 23:06:27.718900] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:35.340 [2024-07-26 23:06:27.718922] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:35.340 [2024-07-26 23:06:27.718939] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:35.340 [2024-07-26 23:06:27.719002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.340 [2024-07-26 23:06:27.719074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:35.340 [2024-07-26 23:06:27.719129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:35.340 [2024-07-26 23:06:27.719135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:35.599 23:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:35.599 ************************************ 00:36:35.599 START TEST spdk_target_abort 00:36:35.600 ************************************ 00:36:35.600 23:06:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:35.600 23:06:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:35.600 23:06:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:35.600 23:06:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.600 23:06:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.887 spdk_targetn1 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.887 [2024-07-26 23:06:30.762945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.887 [2024-07-26 23:06:30.795223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:38.887 23:06:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.887 EAL: No free 2048 kB hugepages reported on node 1 
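The rabort call assembled above is a queue-depth sweep over SPDK's abort example. A minimal sketch with the exact arguments from the trace, run from the SPDK build tree:

    TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # 4 KiB I/O at a 50/50 read/write mix; the tool fires aborts at
        # in-flight commands and reports success/unsuccess/failed per run
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
    done

The per-run counters that follow break down, for each queue depth, how many aborts were submitted versus failed to submit.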
00:36:42.172 Initializing NVMe Controllers 00:36:42.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:42.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:42.172 Initialization complete. Launching workers. 00:36:42.172 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10697, failed: 0 00:36:42.172 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 9451 00:36:42.172 success 848, unsuccess 398, failed 0 00:36:42.172 23:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:42.172 23:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:42.172 EAL: No free 2048 kB hugepages reported on node 1 00:36:45.462 Initializing NVMe Controllers 00:36:45.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:45.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:45.462 Initialization complete. Launching workers. 00:36:45.462 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8570, failed: 0 00:36:45.462 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1217, failed to submit 7353 00:36:45.462 success 345, unsuccess 872, failed 0 00:36:45.462 23:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:45.462 23:06:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:45.462 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.751 Initializing NVMe Controllers 00:36:48.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:48.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:48.751 Initialization complete. Launching workers. 
00:36:48.751 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30446, failed: 0 00:36:48.751 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2684, failed to submit 27762 00:36:48.751 success 501, unsuccess 2183, failed 0 00:36:48.751 23:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:48.751 23:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.751 23:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.751 23:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.751 23:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:48.751 23:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.751 23:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3720413 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3720413 ']' 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3720413 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3720413 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3720413' 00:36:49.690 killing process with pid 3720413 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3720413 00:36:49.690 23:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3720413 00:36:49.690 00:36:49.690 real 0m14.246s 00:36:49.690 user 0m53.766s 00:36:49.690 sys 0m2.829s 00:36:49.690 23:06:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:49.690 23:06:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.690 ************************************ 00:36:49.690 END TEST spdk_target_abort 00:36:49.690 ************************************ 00:36:49.690 23:06:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:49.690 23:06:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:49.690 23:06:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:49.690 23:06:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.948 ************************************ 00:36:49.948 START TEST kernel_target_abort 00:36:49.948 
************************************ 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:49.948 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:49.949 23:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:50.886 Waiting for block devices as requested 00:36:50.886 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:50.886 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:51.143 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:51.143 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:51.143 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:51.401 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:51.401 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:51.401 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:51.401 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:51.660 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:51.660 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:51.660 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:51.944 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:51.944 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:51.944 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:51.944 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:52.213 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:52.213 No valid GPT data, bailing 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:52.213 23:06:44 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:52.213 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:52.474 00:36:52.474 Discovery Log Number of Records 2, Generation counter 2 00:36:52.474 =====Discovery Log Entry 0====== 00:36:52.474 trtype: tcp 00:36:52.474 adrfam: ipv4 00:36:52.474 subtype: current discovery subsystem 00:36:52.474 treq: not specified, sq flow control disable supported 00:36:52.474 portid: 1 00:36:52.474 trsvcid: 4420 00:36:52.474 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:52.474 traddr: 10.0.0.1 00:36:52.474 eflags: none 00:36:52.474 sectype: none 00:36:52.474 =====Discovery Log Entry 1====== 00:36:52.474 trtype: tcp 00:36:52.474 adrfam: ipv4 00:36:52.474 subtype: nvme subsystem 00:36:52.474 treq: not specified, sq flow control disable supported 00:36:52.474 portid: 1 00:36:52.474 trsvcid: 4420 00:36:52.474 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:52.474 traddr: 10.0.0.1 00:36:52.474 eflags: none 00:36:52.474 sectype: none 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.474 23:06:44 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.474 23:06:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.474 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.758 Initializing NVMe Controllers 00:36:55.758 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:55.758 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:55.758 Initialization complete. Launching workers. 00:36:55.758 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29612, failed: 0 00:36:55.758 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29612, failed to submit 0 00:36:55.758 success 0, unsuccess 29612, failed 0 00:36:55.758 23:06:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:55.758 23:06:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:55.758 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.039 Initializing NVMe Controllers 00:36:59.039 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:59.039 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:59.039 Initialization complete. Launching workers. 
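The -q 4 pass has just finished above and the deeper runs follow below; condensed, the rabort() sweep the trace is stepping through amounts to the loop sketched here (paths, flags, and the field-by-field transport ID are copied from the command lines in the trace; nothing else is assumed):

ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # -q <qd>: queue depth, -w rw -M 50: 50/50 read/write mix,
    # -o 4096: 4 KiB I/Os, -r: transport ID of the kernel target configured above
    "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
done

The NS/CTRLR counters printed after each pass show how many aborts the tool managed to submit against in-flight I/O at that queue depth.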
00:36:59.039 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59029, failed: 0 00:36:59.039 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14870, failed to submit 44159 00:36:59.039 success 0, unsuccess 14870, failed 0 00:36:59.039 23:06:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:59.039 23:06:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:59.039 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.574 Initializing NVMe Controllers 00:37:01.574 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:01.574 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:01.574 Initialization complete. Launching workers. 00:37:01.574 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58087, failed: 0 00:37:01.574 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14482, failed to submit 43605 00:37:01.574 success 0, unsuccess 14482, failed 0 00:37:01.574 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:01.574 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:01.574 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:01.574 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:01.574 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:01.574 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:01.574 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:01.574 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:01.574 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:01.833 23:06:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:02.768 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:02.768 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:02.768 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:02.768 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:02.768 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:02.768 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:02.768 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:02.768 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:02.768 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:02.768 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:02.768 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:02.768 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:02.768 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:02.768 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:37:02.768 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:02.768 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:03.704 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:37:03.962 00:37:03.962 real 0m14.062s 00:37:03.962 user 0m4.678s 00:37:03.962 sys 0m3.289s 00:37:03.962 23:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:03.963 23:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.963 ************************************ 00:37:03.963 END TEST kernel_target_abort 00:37:03.963 ************************************ 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:03.963 rmmod nvme_tcp 00:37:03.963 rmmod nvme_fabrics 00:37:03.963 rmmod nvme_keyring 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3720413 ']' 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3720413 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3720413 ']' 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3720413 00:37:03.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3720413) - No such process 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3720413 is not found' 00:37:03.963 Process with pid 3720413 is not found 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:03.963 23:06:56 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:04.897 Waiting for block devices as requested 00:37:05.155 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:05.155 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:05.155 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:05.414 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:05.414 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:05.414 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:05.414 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:05.675 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:05.675 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:05.675 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:05.675 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:05.933 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:05.933 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:05.933 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:06.192 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:06.192 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:37:06.192 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:06.452 23:06:58 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:06.452 23:06:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:06.452 23:06:58 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:06.452 23:06:58 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:06.452 23:06:58 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.452 23:06:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:06.452 23:06:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.356 23:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:08.356 00:37:08.356 real 0m37.685s 00:37:08.356 user 1m0.505s 00:37:08.356 sys 0m9.466s 00:37:08.356 23:07:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:08.356 23:07:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:08.356 ************************************ 00:37:08.356 END TEST nvmf_abort_qd_sizes 00:37:08.356 ************************************ 00:37:08.356 23:07:00 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:08.356 23:07:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:08.356 23:07:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:08.356 23:07:00 -- common/autotest_common.sh@10 -- # set +x 00:37:08.356 ************************************ 00:37:08.356 START TEST keyring_file 00:37:08.356 ************************************ 00:37:08.356 23:07:00 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:08.615 * Looking for test storage... 
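Before keyring_file spins up: the END TEST kernel_target_abort block above undid the configfs plumbing with clean_kernel_target. A minimal teardown sketch, assuming the standard nvmet attribute names (the trace only shows the values being written, e.g. the bare "echo 0"):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"                         # quiesce the namespace (attribute name assumed)
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # drop the port->subsystem link
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"        # empty dirs, reverse creation order
modprobe -r nvmet_tcp nvmet                                    # modules can unload once configfs is empty

The symlink has to go first; configfs refuses to remove a port directory that still exports a subsystem.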
00:37:08.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:08.615 23:07:00 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.615 23:07:00 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.615 23:07:00 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.615 23:07:00 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.615 23:07:00 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.615 23:07:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.615 23:07:00 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.615 23:07:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:08.615 23:07:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:08.615 23:07:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:08.615 23:07:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:08.615 23:07:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:08.615 23:07:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:08.615 23:07:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:08.615 23:07:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lMcpvUDqUq 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:08.615 23:07:00 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lMcpvUDqUq 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lMcpvUDqUq 00:37:08.615 23:07:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lMcpvUDqUq 00:37:08.615 23:07:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.B9HVq3Kl3E 00:37:08.615 23:07:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:08.615 23:07:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:08.616 23:07:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:08.616 23:07:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:08.616 23:07:00 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:08.616 23:07:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:08.616 23:07:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:08.616 23:07:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.B9HVq3Kl3E 00:37:08.616 23:07:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.B9HVq3Kl3E 00:37:08.616 23:07:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.B9HVq3Kl3E 00:37:08.616 23:07:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=3726165 00:37:08.616 23:07:00 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:08.616 23:07:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3726165 00:37:08.616 23:07:00 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3726165 ']' 00:37:08.616 23:07:00 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.616 23:07:00 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:08.616 23:07:00 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:08.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.616 23:07:00 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:08.616 23:07:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:08.616 [2024-07-26 23:07:01.032695] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
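Both test keys were staged above by prep_key: mktemp, wrap the raw hex key in the TLS interchange framing, chmod 0600. A standalone sketch of the same flow; the framing details ("NVMeTLSkey-1:00:" for digest 0 plus a base64-encoded CRC32 trailer, little-endian) are assumptions inferred from the format_interchange_psk/format_key helper names, not shown in the trace:

path=$(mktemp)                        # e.g. /tmp/tmp.lMcpvUDqUq above
python3 -c '
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])                 # raw PSK bytes
crc = binascii.crc32(raw).to_bytes(4, "little")  # CRC32 trailer (endianness assumed)
print("NVMeTLSkey-1:00:" + base64.b64encode(raw + crc).decode() + ":")
' 00112233445566778899aabbccddeeff > "$path"
chmod 0600 "$path"                    # keyring rejects group/other-accessible key files

The 0600 matters: a test case further down deliberately chmods the file to 0660 and expects keyring_file_add_key to reject it with "Invalid permissions".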
00:37:08.616 [2024-07-26 23:07:01.032784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726165 ] 00:37:08.616 EAL: No free 2048 kB hugepages reported on node 1 00:37:08.616 [2024-07-26 23:07:01.090255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.874 [2024-07-26 23:07:01.181106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:09.132 23:07:01 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:09.132 [2024-07-26 23:07:01.424817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:09.132 null0 00:37:09.132 [2024-07-26 23:07:01.456875] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:09.132 [2024-07-26 23:07:01.457393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:09.132 [2024-07-26 23:07:01.464891] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.132 23:07:01 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:09.132 [2024-07-26 23:07:01.476919] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:09.132 request: 00:37:09.132 { 00:37:09.132 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.132 "secure_channel": false, 00:37:09.132 "listen_address": { 00:37:09.132 "trtype": "tcp", 00:37:09.132 "traddr": "127.0.0.1", 00:37:09.132 "trsvcid": "4420" 00:37:09.132 }, 00:37:09.132 "method": "nvmf_subsystem_add_listener", 00:37:09.132 "req_id": 1 00:37:09.132 } 00:37:09.132 Got JSON-RPC error response 00:37:09.132 response: 00:37:09.132 { 00:37:09.132 "code": -32602, 00:37:09.132 "message": "Invalid parameters" 00:37:09.132 } 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:09.132 23:07:01 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:09.133 23:07:01 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:09.133 23:07:01 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:09.133 23:07:01 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:09.133 23:07:01 keyring_file -- keyring/file.sh@46 -- # bperfpid=3726179 00:37:09.133 23:07:01 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:09.133 23:07:01 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3726179 /var/tmp/bperf.sock 00:37:09.133 23:07:01 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3726179 ']' 00:37:09.133 23:07:01 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:09.133 23:07:01 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:09.133 23:07:01 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:09.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:09.133 23:07:01 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:09.133 23:07:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:09.133 [2024-07-26 23:07:01.523502] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:37:09.133 [2024-07-26 23:07:01.523564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726179 ] 00:37:09.133 EAL: No free 2048 kB hugepages reported on node 1 00:37:09.133 [2024-07-26 23:07:01.583679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.391 [2024-07-26 23:07:01.674879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.391 23:07:01 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:09.391 23:07:01 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:09.391 23:07:01 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lMcpvUDqUq 00:37:09.391 23:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lMcpvUDqUq 00:37:09.649 23:07:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.B9HVq3Kl3E 00:37:09.649 23:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.B9HVq3Kl3E 00:37:09.907 23:07:02 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:09.908 23:07:02 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:09.908 23:07:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.908 23:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.908 23:07:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.167 23:07:02 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.lMcpvUDqUq == \/\t\m\p\/\t\m\p\.\l\M\c\p\v\U\D\q\U\q ]] 00:37:10.167 23:07:02 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:10.167 23:07:02 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:10.167 23:07:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.167 23:07:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:10.167 23:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.426 23:07:02 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.B9HVq3Kl3E == \/\t\m\p\/\t\m\p\.\B\9\H\V\q\3\K\l\3\E ]] 00:37:10.426 23:07:02 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:10.426 23:07:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:10.426 23:07:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.426 23:07:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.426 23:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.426 23:07:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.684 23:07:03 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:10.684 23:07:03 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:10.684 23:07:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:10.684 23:07:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.684 23:07:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.684 23:07:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:10.684 23:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.973 23:07:03 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:10.973 23:07:03 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:10.973 23:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:11.238 [2024-07-26 23:07:03.509793] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:11.238 nvme0n1 00:37:11.238 23:07:03 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:11.238 23:07:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:11.238 23:07:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:11.238 23:07:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.238 23:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.238 23:07:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:11.495 23:07:03 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:11.495 23:07:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:11.495 23:07:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:11.495 23:07:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:11.495 23:07:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.495 
23:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.495 23:07:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:11.752 23:07:04 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:11.752 23:07:04 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:11.752 Running I/O for 1 seconds... 00:37:13.130 00:37:13.130 Latency(us) 00:37:13.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.130 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:13.130 nvme0n1 : 1.03 4443.99 17.36 0.00 0.00 28420.33 5048.70 35535.08 00:37:13.130 =================================================================================================================== 00:37:13.130 Total : 4443.99 17.36 0.00 0.00 28420.33 5048.70 35535.08 00:37:13.130 0 00:37:13.130 23:07:05 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:13.130 23:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:13.130 23:07:05 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:13.130 23:07:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.130 23:07:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.130 23:07:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.130 23:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.130 23:07:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.386 23:07:05 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:13.386 23:07:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:13.387 23:07:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:13.387 23:07:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.387 23:07:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.387 23:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.387 23:07:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:13.644 23:07:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:13.644 23:07:06 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:13.644 23:07:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:13.644 23:07:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:13.644 23:07:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:13.644 23:07:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:13.644 23:07:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:13.644 23:07:06 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:13.644 23:07:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:13.644 23:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:13.901 [2024-07-26 23:07:06.260031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:13.901 [2024-07-26 23:07:06.260440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd5310 (107): Transport endpoint is not connected 00:37:13.901 [2024-07-26 23:07:06.261438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd5310 (9): Bad file descriptor 00:37:13.901 [2024-07-26 23:07:06.262429] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:13.901 [2024-07-26 23:07:06.262453] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:13.901 [2024-07-26 23:07:06.262480] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:13.901 request: 00:37:13.901 { 00:37:13.901 "name": "nvme0", 00:37:13.901 "trtype": "tcp", 00:37:13.901 "traddr": "127.0.0.1", 00:37:13.901 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:13.901 "adrfam": "ipv4", 00:37:13.901 "trsvcid": "4420", 00:37:13.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:13.901 "psk": "key1", 00:37:13.901 "method": "bdev_nvme_attach_controller", 00:37:13.901 "req_id": 1 00:37:13.901 } 00:37:13.901 Got JSON-RPC error response 00:37:13.901 response: 00:37:13.901 { 00:37:13.901 "code": -5, 00:37:13.901 "message": "Input/output error" 00:37:13.901 } 00:37:13.901 23:07:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:13.901 23:07:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:13.901 23:07:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:13.901 23:07:06 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:13.901 23:07:06 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:13.901 23:07:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.901 23:07:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.901 23:07:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.901 23:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.901 23:07:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:14.159 23:07:06 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:14.159 23:07:06 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:14.159 23:07:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:14.159 23:07:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:14.159 23:07:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:14.159 23:07:06 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.159 23:07:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:14.417 23:07:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:14.417 23:07:06 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:14.417 23:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:14.674 23:07:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:14.674 23:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:14.932 23:07:07 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:14.932 23:07:07 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:14.932 23:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.189 23:07:07 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:15.189 23:07:07 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.lMcpvUDqUq 00:37:15.189 23:07:07 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lMcpvUDqUq 00:37:15.189 23:07:07 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:15.189 23:07:07 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lMcpvUDqUq 00:37:15.189 23:07:07 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:15.189 23:07:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.189 23:07:07 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:15.189 23:07:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.189 23:07:07 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lMcpvUDqUq 00:37:15.189 23:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lMcpvUDqUq 00:37:15.447 [2024-07-26 23:07:07.766436] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lMcpvUDqUq': 0100660 00:37:15.447 [2024-07-26 23:07:07.766474] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:15.447 request: 00:37:15.447 { 00:37:15.447 "name": "key0", 00:37:15.447 "path": "/tmp/tmp.lMcpvUDqUq", 00:37:15.447 "method": "keyring_file_add_key", 00:37:15.447 "req_id": 1 00:37:15.447 } 00:37:15.447 Got JSON-RPC error response 00:37:15.447 response: 00:37:15.447 { 00:37:15.447 "code": -1, 00:37:15.447 "message": "Operation not permitted" 00:37:15.447 } 00:37:15.447 23:07:07 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:15.447 23:07:07 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:15.447 23:07:07 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:15.447 23:07:07 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:15.447 23:07:07 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.lMcpvUDqUq 00:37:15.447 23:07:07 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.lMcpvUDqUq 00:37:15.447 23:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lMcpvUDqUq 00:37:15.705 23:07:08 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.lMcpvUDqUq 00:37:15.705 23:07:08 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:15.705 23:07:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:15.705 23:07:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.705 23:07:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.705 23:07:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.705 23:07:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:15.963 23:07:08 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:15.963 23:07:08 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.963 23:07:08 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:15.963 23:07:08 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.963 23:07:08 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:15.963 23:07:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.963 23:07:08 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:15.963 23:07:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.963 23:07:08 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.963 23:07:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:16.221 [2024-07-26 23:07:08.500445] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lMcpvUDqUq': No such file or directory 00:37:16.221 [2024-07-26 23:07:08.500483] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:16.221 [2024-07-26 23:07:08.500519] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:16.221 [2024-07-26 23:07:08.500530] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:16.221 [2024-07-26 23:07:08.500542] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:16.221 request: 00:37:16.221 { 00:37:16.221 "name": "nvme0", 00:37:16.221 "trtype": "tcp", 00:37:16.221 "traddr": "127.0.0.1", 00:37:16.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.221 "adrfam": "ipv4", 00:37:16.221 "trsvcid": "4420", 00:37:16.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.221 "psk": "key0", 00:37:16.221 "method": "bdev_nvme_attach_controller", 
00:37:16.221 "req_id": 1 00:37:16.221 } 00:37:16.221 Got JSON-RPC error response 00:37:16.221 response: 00:37:16.221 { 00:37:16.221 "code": -19, 00:37:16.221 "message": "No such device" 00:37:16.221 } 00:37:16.221 23:07:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:16.221 23:07:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:16.221 23:07:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:16.221 23:07:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:16.221 23:07:08 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:16.221 23:07:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:16.478 23:07:08 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:16.478 23:07:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:16.478 23:07:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:16.478 23:07:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:16.478 23:07:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:16.478 23:07:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:16.478 23:07:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Wsq9Ek6mxw 00:37:16.478 23:07:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:16.478 23:07:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:16.478 23:07:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:16.478 23:07:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:16.478 23:07:08 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:16.478 23:07:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:16.479 23:07:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:16.479 23:07:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Wsq9Ek6mxw 00:37:16.479 23:07:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Wsq9Ek6mxw 00:37:16.479 23:07:08 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Wsq9Ek6mxw 00:37:16.479 23:07:08 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Wsq9Ek6mxw 00:37:16.479 23:07:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Wsq9Ek6mxw 00:37:16.736 23:07:09 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:16.736 23:07:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:16.993 nvme0n1 00:37:16.993 23:07:09 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:16.993 23:07:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:16.993 23:07:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.993 23:07:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.993 23:07:09 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:16.993 23:07:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.250 23:07:09 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:17.250 23:07:09 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:17.250 23:07:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:17.507 23:07:09 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:17.508 23:07:09 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:17.508 23:07:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.508 23:07:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.508 23:07:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.765 23:07:10 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:17.765 23:07:10 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:17.765 23:07:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:17.765 23:07:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:17.765 23:07:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.765 23:07:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.765 23:07:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.022 23:07:10 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:18.023 23:07:10 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:18.023 23:07:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:18.280 23:07:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:18.280 23:07:10 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:18.280 23:07:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.538 23:07:10 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:18.538 23:07:10 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Wsq9Ek6mxw 00:37:18.538 23:07:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Wsq9Ek6mxw 00:37:18.795 23:07:11 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.B9HVq3Kl3E 00:37:18.795 23:07:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.B9HVq3Kl3E 00:37:19.052 23:07:11 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.052 23:07:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.309 nvme0n1 00:37:19.309 23:07:11 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:19.309 23:07:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:19.567 23:07:12 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:19.567 "subsystems": [ 00:37:19.567 { 00:37:19.567 "subsystem": "keyring", 00:37:19.567 "config": [ 00:37:19.567 { 00:37:19.567 "method": "keyring_file_add_key", 00:37:19.567 "params": { 00:37:19.567 "name": "key0", 00:37:19.567 "path": "/tmp/tmp.Wsq9Ek6mxw" 00:37:19.567 } 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "method": "keyring_file_add_key", 00:37:19.567 "params": { 00:37:19.567 "name": "key1", 00:37:19.567 "path": "/tmp/tmp.B9HVq3Kl3E" 00:37:19.567 } 00:37:19.567 } 00:37:19.567 ] 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "subsystem": "iobuf", 00:37:19.567 "config": [ 00:37:19.567 { 00:37:19.567 "method": "iobuf_set_options", 00:37:19.567 "params": { 00:37:19.567 "small_pool_count": 8192, 00:37:19.567 "large_pool_count": 1024, 00:37:19.567 "small_bufsize": 8192, 00:37:19.567 "large_bufsize": 135168 00:37:19.567 } 00:37:19.567 } 00:37:19.567 ] 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "subsystem": "sock", 00:37:19.567 "config": [ 00:37:19.567 { 00:37:19.567 "method": "sock_set_default_impl", 00:37:19.567 "params": { 00:37:19.567 "impl_name": "posix" 00:37:19.567 } 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "method": "sock_impl_set_options", 00:37:19.567 "params": { 00:37:19.567 "impl_name": "ssl", 00:37:19.567 "recv_buf_size": 4096, 00:37:19.567 "send_buf_size": 4096, 00:37:19.567 "enable_recv_pipe": true, 00:37:19.567 "enable_quickack": false, 00:37:19.567 "enable_placement_id": 0, 00:37:19.567 "enable_zerocopy_send_server": true, 00:37:19.567 "enable_zerocopy_send_client": false, 00:37:19.567 "zerocopy_threshold": 0, 00:37:19.567 "tls_version": 0, 00:37:19.567 "enable_ktls": false 00:37:19.567 } 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "method": "sock_impl_set_options", 00:37:19.567 "params": { 00:37:19.567 "impl_name": "posix", 00:37:19.567 "recv_buf_size": 2097152, 00:37:19.567 "send_buf_size": 2097152, 00:37:19.567 "enable_recv_pipe": true, 00:37:19.567 "enable_quickack": false, 00:37:19.567 "enable_placement_id": 0, 00:37:19.567 "enable_zerocopy_send_server": true, 00:37:19.567 "enable_zerocopy_send_client": false, 00:37:19.567 "zerocopy_threshold": 0, 00:37:19.567 "tls_version": 0, 00:37:19.567 "enable_ktls": false 00:37:19.567 } 00:37:19.567 } 00:37:19.567 ] 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "subsystem": "vmd", 00:37:19.567 "config": [] 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "subsystem": "accel", 00:37:19.567 "config": [ 00:37:19.567 { 00:37:19.567 "method": "accel_set_options", 00:37:19.567 "params": { 00:37:19.567 "small_cache_size": 128, 00:37:19.567 "large_cache_size": 16, 00:37:19.567 "task_count": 2048, 00:37:19.567 "sequence_count": 2048, 00:37:19.567 "buf_count": 2048 00:37:19.567 } 00:37:19.567 } 00:37:19.567 ] 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "subsystem": "bdev", 00:37:19.567 "config": [ 00:37:19.567 { 00:37:19.567 "method": "bdev_set_options", 00:37:19.567 "params": { 00:37:19.567 "bdev_io_pool_size": 65535, 00:37:19.567 "bdev_io_cache_size": 256, 00:37:19.567 "bdev_auto_examine": true, 00:37:19.567 "iobuf_small_cache_size": 128, 
00:37:19.567 "iobuf_large_cache_size": 16 00:37:19.567 } 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "method": "bdev_raid_set_options", 00:37:19.567 "params": { 00:37:19.567 "process_window_size_kb": 1024 00:37:19.567 } 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "method": "bdev_iscsi_set_options", 00:37:19.567 "params": { 00:37:19.567 "timeout_sec": 30 00:37:19.567 } 00:37:19.567 }, 00:37:19.567 { 00:37:19.567 "method": "bdev_nvme_set_options", 00:37:19.567 "params": { 00:37:19.567 "action_on_timeout": "none", 00:37:19.567 "timeout_us": 0, 00:37:19.567 "timeout_admin_us": 0, 00:37:19.567 "keep_alive_timeout_ms": 10000, 00:37:19.567 "arbitration_burst": 0, 00:37:19.567 "low_priority_weight": 0, 00:37:19.567 "medium_priority_weight": 0, 00:37:19.567 "high_priority_weight": 0, 00:37:19.567 "nvme_adminq_poll_period_us": 10000, 00:37:19.567 "nvme_ioq_poll_period_us": 0, 00:37:19.567 "io_queue_requests": 512, 00:37:19.567 "delay_cmd_submit": true, 00:37:19.567 "transport_retry_count": 4, 00:37:19.567 "bdev_retry_count": 3, 00:37:19.567 "transport_ack_timeout": 0, 00:37:19.567 "ctrlr_loss_timeout_sec": 0, 00:37:19.567 "reconnect_delay_sec": 0, 00:37:19.567 "fast_io_fail_timeout_sec": 0, 00:37:19.567 "disable_auto_failback": false, 00:37:19.567 "generate_uuids": false, 00:37:19.567 "transport_tos": 0, 00:37:19.567 "nvme_error_stat": false, 00:37:19.567 "rdma_srq_size": 0, 00:37:19.567 "io_path_stat": false, 00:37:19.567 "allow_accel_sequence": false, 00:37:19.567 "rdma_max_cq_size": 0, 00:37:19.567 "rdma_cm_event_timeout_ms": 0, 00:37:19.567 "dhchap_digests": [ 00:37:19.567 "sha256", 00:37:19.567 "sha384", 00:37:19.567 "sha512" 00:37:19.567 ], 00:37:19.567 "dhchap_dhgroups": [ 00:37:19.567 "null", 00:37:19.567 "ffdhe2048", 00:37:19.567 "ffdhe3072", 00:37:19.568 "ffdhe4096", 00:37:19.568 "ffdhe6144", 00:37:19.568 "ffdhe8192" 00:37:19.568 ] 00:37:19.568 } 00:37:19.568 }, 00:37:19.568 { 00:37:19.568 "method": "bdev_nvme_attach_controller", 00:37:19.568 "params": { 00:37:19.568 "name": "nvme0", 00:37:19.568 "trtype": "TCP", 00:37:19.568 "adrfam": "IPv4", 00:37:19.568 "traddr": "127.0.0.1", 00:37:19.568 "trsvcid": "4420", 00:37:19.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.568 "prchk_reftag": false, 00:37:19.568 "prchk_guard": false, 00:37:19.568 "ctrlr_loss_timeout_sec": 0, 00:37:19.568 "reconnect_delay_sec": 0, 00:37:19.568 "fast_io_fail_timeout_sec": 0, 00:37:19.568 "psk": "key0", 00:37:19.568 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.568 "hdgst": false, 00:37:19.568 "ddgst": false 00:37:19.568 } 00:37:19.568 }, 00:37:19.568 { 00:37:19.568 "method": "bdev_nvme_set_hotplug", 00:37:19.568 "params": { 00:37:19.568 "period_us": 100000, 00:37:19.568 "enable": false 00:37:19.568 } 00:37:19.568 }, 00:37:19.568 { 00:37:19.568 "method": "bdev_wait_for_examine" 00:37:19.568 } 00:37:19.568 ] 00:37:19.568 }, 00:37:19.568 { 00:37:19.568 "subsystem": "nbd", 00:37:19.568 "config": [] 00:37:19.568 } 00:37:19.568 ] 00:37:19.568 }' 00:37:19.568 23:07:12 keyring_file -- keyring/file.sh@114 -- # killprocess 3726179 00:37:19.568 23:07:12 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3726179 ']' 00:37:19.568 23:07:12 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3726179 00:37:19.568 23:07:12 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:19.568 23:07:12 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:19.568 23:07:12 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3726179 00:37:19.568 23:07:12 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:19.568 23:07:12 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:19.568 23:07:12 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3726179' 00:37:19.568 killing process with pid 3726179 00:37:19.568 23:07:12 keyring_file -- common/autotest_common.sh@965 -- # kill 3726179 00:37:19.568 Received shutdown signal, test time was about 1.000000 seconds 00:37:19.568 00:37:19.568 Latency(us) 00:37:19.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.568 =================================================================================================================== 00:37:19.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:19.568 23:07:12 keyring_file -- common/autotest_common.sh@970 -- # wait 3726179 00:37:19.826 23:07:12 keyring_file -- keyring/file.sh@117 -- # bperfpid=3727638 00:37:19.826 23:07:12 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3727638 /var/tmp/bperf.sock 00:37:19.826 23:07:12 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3727638 ']' 00:37:19.826 23:07:12 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:19.826 23:07:12 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:19.826 23:07:12 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:19.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:19.826 23:07:12 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:19.826 23:07:12 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:19.826 23:07:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:19.826 23:07:12 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:19.826 "subsystems": [ 00:37:19.826 { 00:37:19.826 "subsystem": "keyring", 00:37:19.826 "config": [ 00:37:19.826 { 00:37:19.826 "method": "keyring_file_add_key", 00:37:19.826 "params": { 00:37:19.826 "name": "key0", 00:37:19.826 "path": "/tmp/tmp.Wsq9Ek6mxw" 00:37:19.826 } 00:37:19.826 }, 00:37:19.826 { 00:37:19.826 "method": "keyring_file_add_key", 00:37:19.826 "params": { 00:37:19.826 "name": "key1", 00:37:19.826 "path": "/tmp/tmp.B9HVq3Kl3E" 00:37:19.826 } 00:37:19.826 } 00:37:19.826 ] 00:37:19.826 }, 00:37:19.826 { 00:37:19.826 "subsystem": "iobuf", 00:37:19.826 "config": [ 00:37:19.826 { 00:37:19.826 "method": "iobuf_set_options", 00:37:19.826 "params": { 00:37:19.826 "small_pool_count": 8192, 00:37:19.826 "large_pool_count": 1024, 00:37:19.826 "small_bufsize": 8192, 00:37:19.826 "large_bufsize": 135168 00:37:19.826 } 00:37:19.826 } 00:37:19.826 ] 00:37:19.826 }, 00:37:19.826 { 00:37:19.826 "subsystem": "sock", 00:37:19.826 "config": [ 00:37:19.826 { 00:37:19.826 "method": "sock_set_default_impl", 00:37:19.826 "params": { 00:37:19.826 "impl_name": "posix" 00:37:19.826 } 00:37:19.826 }, 00:37:19.826 { 00:37:19.826 "method": "sock_impl_set_options", 00:37:19.826 "params": { 00:37:19.826 "impl_name": "ssl", 00:37:19.826 "recv_buf_size": 4096, 00:37:19.826 "send_buf_size": 4096, 00:37:19.826 "enable_recv_pipe": true, 00:37:19.826 "enable_quickack": false, 00:37:19.826 "enable_placement_id": 0, 00:37:19.827 "enable_zerocopy_send_server": true, 
00:37:19.827 "enable_zerocopy_send_client": false, 00:37:19.827 "zerocopy_threshold": 0, 00:37:19.827 "tls_version": 0, 00:37:19.827 "enable_ktls": false 00:37:19.827 } 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "method": "sock_impl_set_options", 00:37:19.827 "params": { 00:37:19.827 "impl_name": "posix", 00:37:19.827 "recv_buf_size": 2097152, 00:37:19.827 "send_buf_size": 2097152, 00:37:19.827 "enable_recv_pipe": true, 00:37:19.827 "enable_quickack": false, 00:37:19.827 "enable_placement_id": 0, 00:37:19.827 "enable_zerocopy_send_server": true, 00:37:19.827 "enable_zerocopy_send_client": false, 00:37:19.827 "zerocopy_threshold": 0, 00:37:19.827 "tls_version": 0, 00:37:19.827 "enable_ktls": false 00:37:19.827 } 00:37:19.827 } 00:37:19.827 ] 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "subsystem": "vmd", 00:37:19.827 "config": [] 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "subsystem": "accel", 00:37:19.827 "config": [ 00:37:19.827 { 00:37:19.827 "method": "accel_set_options", 00:37:19.827 "params": { 00:37:19.827 "small_cache_size": 128, 00:37:19.827 "large_cache_size": 16, 00:37:19.827 "task_count": 2048, 00:37:19.827 "sequence_count": 2048, 00:37:19.827 "buf_count": 2048 00:37:19.827 } 00:37:19.827 } 00:37:19.827 ] 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "subsystem": "bdev", 00:37:19.827 "config": [ 00:37:19.827 { 00:37:19.827 "method": "bdev_set_options", 00:37:19.827 "params": { 00:37:19.827 "bdev_io_pool_size": 65535, 00:37:19.827 "bdev_io_cache_size": 256, 00:37:19.827 "bdev_auto_examine": true, 00:37:19.827 "iobuf_small_cache_size": 128, 00:37:19.827 "iobuf_large_cache_size": 16 00:37:19.827 } 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "method": "bdev_raid_set_options", 00:37:19.827 "params": { 00:37:19.827 "process_window_size_kb": 1024 00:37:19.827 } 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "method": "bdev_iscsi_set_options", 00:37:19.827 "params": { 00:37:19.827 "timeout_sec": 30 00:37:19.827 } 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "method": "bdev_nvme_set_options", 00:37:19.827 "params": { 00:37:19.827 "action_on_timeout": "none", 00:37:19.827 "timeout_us": 0, 00:37:19.827 "timeout_admin_us": 0, 00:37:19.827 "keep_alive_timeout_ms": 10000, 00:37:19.827 "arbitration_burst": 0, 00:37:19.827 "low_priority_weight": 0, 00:37:19.827 "medium_priority_weight": 0, 00:37:19.827 "high_priority_weight": 0, 00:37:19.827 "nvme_adminq_poll_period_us": 10000, 00:37:19.827 "nvme_ioq_poll_period_us": 0, 00:37:19.827 "io_queue_requests": 512, 00:37:19.827 "delay_cmd_submit": true, 00:37:19.827 "transport_retry_count": 4, 00:37:19.827 "bdev_retry_count": 3, 00:37:19.827 "transport_ack_timeout": 0, 00:37:19.827 "ctrlr_loss_timeout_sec": 0, 00:37:19.827 "reconnect_delay_sec": 0, 00:37:19.827 "fast_io_fail_timeout_sec": 0, 00:37:19.827 "disable_auto_failback": false, 00:37:19.827 "generate_uuids": false, 00:37:19.827 "transport_tos": 0, 00:37:19.827 "nvme_error_stat": false, 00:37:19.827 "rdma_srq_size": 0, 00:37:19.827 "io_path_stat": false, 00:37:19.827 "allow_accel_sequence": false, 00:37:19.827 "rdma_max_cq_size": 0, 00:37:19.827 "rdma_cm_event_timeout_ms": 0, 00:37:19.827 "dhchap_digests": [ 00:37:19.827 "sha256", 00:37:19.827 "sha384", 00:37:19.827 "sha512" 00:37:19.827 ], 00:37:19.827 "dhchap_dhgroups": [ 00:37:19.827 "null", 00:37:19.827 "ffdhe2048", 00:37:19.827 "ffdhe3072", 00:37:19.827 "ffdhe4096", 00:37:19.827 "ffdhe6144", 00:37:19.827 "ffdhe8192" 00:37:19.827 ] 00:37:19.827 } 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "method": "bdev_nvme_attach_controller", 00:37:19.827 
"params": { 00:37:19.827 "name": "nvme0", 00:37:19.827 "trtype": "TCP", 00:37:19.827 "adrfam": "IPv4", 00:37:19.827 "traddr": "127.0.0.1", 00:37:19.827 "trsvcid": "4420", 00:37:19.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.827 "prchk_reftag": false, 00:37:19.827 "prchk_guard": false, 00:37:19.827 "ctrlr_loss_timeout_sec": 0, 00:37:19.827 "reconnect_delay_sec": 0, 00:37:19.827 "fast_io_fail_timeout_sec": 0, 00:37:19.827 "psk": "key0", 00:37:19.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.827 "hdgst": false, 00:37:19.827 "ddgst": false 00:37:19.827 } 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "method": "bdev_nvme_set_hotplug", 00:37:19.827 "params": { 00:37:19.827 "period_us": 100000, 00:37:19.827 "enable": false 00:37:19.827 } 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "method": "bdev_wait_for_examine" 00:37:19.827 } 00:37:19.827 ] 00:37:19.827 }, 00:37:19.827 { 00:37:19.827 "subsystem": "nbd", 00:37:19.827 "config": [] 00:37:19.827 } 00:37:19.827 ] 00:37:19.827 }' 00:37:19.827 [2024-07-26 23:07:12.293022] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:37:19.827 [2024-07-26 23:07:12.293128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727638 ] 00:37:19.827 EAL: No free 2048 kB hugepages reported on node 1 00:37:20.085 [2024-07-26 23:07:12.353691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.085 [2024-07-26 23:07:12.444070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:20.342 [2024-07-26 23:07:12.625374] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:20.907 23:07:13 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:20.907 23:07:13 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:20.907 23:07:13 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:20.907 23:07:13 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:20.907 23:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.165 23:07:13 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:21.165 23:07:13 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:21.165 23:07:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:21.165 23:07:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.165 23:07:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.165 23:07:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:21.165 23:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.422 23:07:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:21.422 23:07:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:21.422 23:07:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:21.422 23:07:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.422 23:07:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.422 23:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:37:21.422 23:07:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:21.680 23:07:13 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:21.680 23:07:13 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:21.680 23:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:21.680 23:07:13 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:21.938 23:07:14 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:21.938 23:07:14 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:21.938 23:07:14 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Wsq9Ek6mxw /tmp/tmp.B9HVq3Kl3E 00:37:21.938 23:07:14 keyring_file -- keyring/file.sh@20 -- # killprocess 3727638 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3727638 ']' 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3727638 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3727638 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3727638' 00:37:21.938 killing process with pid 3727638 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@965 -- # kill 3727638 00:37:21.938 Received shutdown signal, test time was about 1.000000 seconds 00:37:21.938 00:37:21.938 Latency(us) 00:37:21.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.938 =================================================================================================================== 00:37:21.938 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:21.938 23:07:14 keyring_file -- common/autotest_common.sh@970 -- # wait 3727638 00:37:22.195 23:07:14 keyring_file -- keyring/file.sh@21 -- # killprocess 3726165 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3726165 ']' 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3726165 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3726165 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3726165' 00:37:22.195 killing process with pid 3726165 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@965 -- # kill 3726165 00:37:22.195 [2024-07-26 23:07:14.492691] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:22.195 23:07:14 keyring_file -- common/autotest_common.sh@970 -- # wait 3726165 00:37:22.453 00:37:22.453 real 0m14.034s 
00:37:22.453 user 0m34.705s 00:37:22.453 sys 0m3.228s 00:37:22.453 23:07:14 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:22.453 23:07:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:22.453 ************************************ 00:37:22.453 END TEST keyring_file 00:37:22.453 ************************************ 00:37:22.453 23:07:14 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:22.453 23:07:14 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:22.453 23:07:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:22.453 23:07:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:22.453 23:07:14 -- common/autotest_common.sh@10 -- # set +x 00:37:22.453 ************************************ 00:37:22.453 START TEST keyring_linux 00:37:22.453 ************************************ 00:37:22.453 23:07:14 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:22.453 * Looking for test storage... 00:37:22.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:22.453 23:07:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:22.453 23:07:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.453 23:07:14 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.453 23:07:14 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.453 23:07:14 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.453 23:07:14 keyring_linux -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.453 23:07:14 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.453 23:07:14 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.453 23:07:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:22.453 23:07:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:22.453 23:07:14 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:22.453 23:07:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:22.453 23:07:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:22.453 23:07:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:22.711 23:07:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:22.711 23:07:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:22.711 23:07:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:22.711 23:07:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:22.711 23:07:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:22.711 23:07:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:22.711 23:07:14 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:37:22.711 23:07:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:22.711 23:07:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:22.711 23:07:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:22.711 23:07:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:22.711 23:07:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:22.711 23:07:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:22.711 23:07:14 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:22.711 23:07:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:22.711 23:07:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:22.711 23:07:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:22.711 23:07:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:22.711 /tmp/:spdk-test:key0 00:37:22.711 23:07:15 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:22.711 23:07:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:22.711 23:07:15 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:22.711 23:07:15 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:22.711 23:07:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:22.711 23:07:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:22.711 23:07:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:22.711 23:07:15 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:22.711 23:07:15 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:22.711 23:07:15 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:22.711 23:07:15 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:22.711 23:07:15 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:22.711 23:07:15 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:22.711 23:07:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:22.711 23:07:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:22.711 /tmp/:spdk-test:key1 00:37:22.711 23:07:15 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3727993 00:37:22.711 23:07:15 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:22.711 23:07:15 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3727993 00:37:22.711 23:07:15 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3727993 ']' 00:37:22.711 23:07:15 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.711 23:07:15 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:22.711 23:07:15 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
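The prep_key/format_interchange_psk helpers traced above write the NVMe/TCP PSK interchange string to /tmp/:spdk-test:key0, but the body of the "python -" heredoc is never echoed by xtrace. A minimal sketch of the transformation it appears to perform — raw key bytes plus an appended CRC32, base64-encoded and wrapped as NVMeTLSkey-1:<digest>:...: — where the little-endian CRC byte order and the two-digit digest field are assumptions inferred from the NVMeTLSkey-1:00:MDAx...wJEiQ: string that appears later in this log:

    key=00112233445566778899aabbccddeeff
    digest=0
    python - <<EOF
    import base64, zlib
    k = b"$key"
    crc = zlib.crc32(k).to_bytes(4, "little")  # CRC32 of the key payload; byte order assumed
    print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(k + crc).decode()))
    EOF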
00:37:22.711 23:07:15 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:22.711 23:07:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:22.711 [2024-07-26 23:07:15.084675] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:37:22.711 [2024-07-26 23:07:15.084766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727993 ] 00:37:22.711 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.711 [2024-07-26 23:07:15.145561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.969 [2024-07-26 23:07:15.232634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.969 23:07:15 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:22.969 23:07:15 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:22.969 23:07:15 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:22.969 23:07:15 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.969 23:07:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:22.969 [2024-07-26 23:07:15.472091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.227 null0 00:37:23.227 [2024-07-26 23:07:15.504154] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:23.227 [2024-07-26 23:07:15.504649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:23.227 23:07:15 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.227 23:07:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:23.227 326636059 00:37:23.227 23:07:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:23.227 901437072 00:37:23.227 23:07:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3728072 00:37:23.227 23:07:15 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:23.227 23:07:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3728072 /var/tmp/bperf.sock 00:37:23.227 23:07:15 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3728072 ']' 00:37:23.227 23:07:15 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:23.227 23:07:15 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:23.227 23:07:15 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:23.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:23.227 23:07:15 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:23.227 23:07:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:23.227 [2024-07-26 23:07:15.569246] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
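The two keyctl add user calls above load the interchange keys into the kernel session keyring (@s) and print the serial numbers (326636059 and 901437072) that the rest of the test compares against. The same round trip in isolation, assuming the keyutils userspace tools are installed:

    # Mirrors the linux.sh@66 call above; keyctl(1) is part of keyutils.
    sn=$(keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
    keyctl print "$sn"                     # dump the payload back out
    keyctl search @s user :spdk-test:key0  # resolve the description to the same serial
    keyctl unlink "$sn" @s                 # detach from the session keyring when done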
00:37:23.227 [2024-07-26 23:07:15.569312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728072 ] 00:37:23.227 EAL: No free 2048 kB hugepages reported on node 1 00:37:23.227 [2024-07-26 23:07:15.631242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.227 [2024-07-26 23:07:15.722800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.484 23:07:15 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:23.484 23:07:15 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:23.484 23:07:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:23.484 23:07:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:23.742 23:07:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:23.742 23:07:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:24.000 23:07:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:24.000 23:07:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:24.260 [2024-07-26 23:07:16.568095] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:24.260 nvme0n1 00:37:24.260 23:07:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:24.260 23:07:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:24.260 23:07:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:24.260 23:07:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:24.260 23:07:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:24.260 23:07:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.549 23:07:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:24.549 23:07:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:24.549 23:07:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:24.549 23:07:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:24.549 23:07:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.549 23:07:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.549 23:07:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:24.809 23:07:17 keyring_linux -- keyring/linux.sh@25 -- # sn=326636059 00:37:24.809 23:07:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:24.809 23:07:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
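check_keys and get_keysn above (linux.sh@16 through @27) assert that the serial number SPDK reports for :spdk-test:key0 equals the one the kernel resolves for the same description, and that the stored payload matches the interchange string written earlier. A condensed restatement of that comparison, collapsing the traced steps into one sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sn_spdk=$($rpc -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
    sn_kern=$(keyctl search @s user :spdk-test:key0)
    [[ $sn_spdk == "$sn_kern" ]] || echo "serial mismatch: $sn_spdk vs $sn_kern"
    keyctl print "$sn_kern"  # payload should equal the NVMeTLSkey-1 string added above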
00:37:24.809 23:07:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 326636059 == \3\2\6\6\3\6\0\5\9 ]] 00:37:24.809 23:07:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 326636059 00:37:24.809 23:07:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:24.809 23:07:17 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:24.809 Running I/O for 1 seconds... 00:37:26.186 00:37:26.186 Latency(us) 00:37:26.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.186 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:26.186 nvme0n1 : 1.03 3934.37 15.37 0.00 0.00 32148.55 9806.13 41943.04 00:37:26.186 =================================================================================================================== 00:37:26.186 Total : 3934.37 15.37 0.00 0.00 32148.55 9806.13 41943.04 00:37:26.186 0 00:37:26.186 23:07:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:26.186 23:07:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:26.186 23:07:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:26.186 23:07:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:26.186 23:07:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:26.186 23:07:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:26.186 23:07:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:26.186 23:07:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.444 23:07:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:26.444 23:07:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:26.444 23:07:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:26.444 23:07:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:26.444 23:07:18 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:26.444 23:07:18 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:26.444 23:07:18 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:26.444 23:07:18 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:26.444 23:07:18 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:26.444 23:07:18 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:26.444 23:07:18 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:26.444 23:07:18 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:26.703 [2024-07-26 23:07:19.068253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:26.703 [2024-07-26 23:07:19.068779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e5270 (107): Transport endpoint is not connected 00:37:26.703 [2024-07-26 23:07:19.069769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e5270 (9): Bad file descriptor 00:37:26.703 [2024-07-26 23:07:19.070768] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:26.703 [2024-07-26 23:07:19.070787] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:26.703 [2024-07-26 23:07:19.070801] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:26.703 request: 00:37:26.703 { 00:37:26.703 "name": "nvme0", 00:37:26.703 "trtype": "tcp", 00:37:26.703 "traddr": "127.0.0.1", 00:37:26.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.703 "adrfam": "ipv4", 00:37:26.703 "trsvcid": "4420", 00:37:26.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.703 "psk": ":spdk-test:key1", 00:37:26.703 "method": "bdev_nvme_attach_controller", 00:37:26.703 "req_id": 1 00:37:26.703 } 00:37:26.703 Got JSON-RPC error response 00:37:26.703 response: 00:37:26.703 { 00:37:26.703 "code": -5, 00:37:26.703 "message": "Input/output error" 00:37:26.703 } 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@33 -- # sn=326636059 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 326636059 00:37:26.703 1 links removed 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@33 -- # sn=901437072 00:37:26.703 23:07:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 901437072 00:37:26.703 1 links removed 00:37:26.703 23:07:19 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 3728072 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3728072 ']' 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3728072 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3728072 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3728072' 00:37:26.703 killing process with pid 3728072 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@965 -- # kill 3728072 00:37:26.703 Received shutdown signal, test time was about 1.000000 seconds 00:37:26.703 00:37:26.703 Latency(us) 00:37:26.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.703 =================================================================================================================== 00:37:26.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:26.703 23:07:19 keyring_linux -- common/autotest_common.sh@970 -- # wait 3728072 00:37:26.962 23:07:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3727993 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3727993 ']' 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3727993 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3727993 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3727993' 00:37:26.962 killing process with pid 3727993 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@965 -- # kill 3727993 00:37:26.962 23:07:19 keyring_linux -- common/autotest_common.sh@970 -- # wait 3727993 00:37:27.531 00:37:27.531 real 0m4.869s 00:37:27.531 user 0m9.111s 00:37:27.531 sys 0m1.487s 00:37:27.531 23:07:19 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:27.531 23:07:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:27.531 ************************************ 00:37:27.531 END TEST keyring_linux 00:37:27.531 ************************************ 00:37:27.531 23:07:19 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@352 -- 
# '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:27.531 23:07:19 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:27.531 23:07:19 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:27.531 23:07:19 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:27.531 23:07:19 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:27.531 23:07:19 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:27.531 23:07:19 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:27.531 23:07:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:27.531 23:07:19 -- common/autotest_common.sh@10 -- # set +x 00:37:27.531 23:07:19 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:27.531 23:07:19 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:27.531 23:07:19 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:27.531 23:07:19 -- common/autotest_common.sh@10 -- # set +x 00:37:29.438 INFO: APP EXITING 00:37:29.438 INFO: killing all VMs 00:37:29.438 INFO: killing vhost app 00:37:29.438 INFO: EXIT DONE 00:37:30.373 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:30.373 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:30.373 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:30.373 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:30.373 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:30.373 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:30.373 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:30.373 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:30.373 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:30.373 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:30.373 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:30.373 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:30.373 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:30.373 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:30.373 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:30.373 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:30.373 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:31.752 Cleaning 00:37:31.752 Removing: /var/run/dpdk/spdk0/config 00:37:31.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:31.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:31.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:31.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:31.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:31.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:31.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:31.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:31.752 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:31.752 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:31.752 Removing: /var/run/dpdk/spdk1/config 00:37:31.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:31.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:31.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:31.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:31.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:31.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:31.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:37:31.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:31.752 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:31.752 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:31.752 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:31.752 Removing: /var/run/dpdk/spdk2/config 00:37:31.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:31.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:31.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:31.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:31.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:31.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:31.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:31.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:31.752 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:31.752 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:31.752 Removing: /var/run/dpdk/spdk3/config 00:37:31.752 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:31.752 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:31.752 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:31.752 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:31.752 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:31.752 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:31.752 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:31.752 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:31.752 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:31.752 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:31.752 Removing: /var/run/dpdk/spdk4/config 00:37:31.752 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:31.752 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:31.752 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:31.752 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:31.752 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:31.752 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:31.752 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:31.752 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:31.752 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:31.752 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:31.752 Removing: /dev/shm/bdev_svc_trace.1 00:37:31.752 Removing: /dev/shm/nvmf_trace.0 00:37:31.752 Removing: /dev/shm/spdk_tgt_trace.pid3408373 00:37:31.752 Removing: /var/run/dpdk/spdk0 00:37:31.752 Removing: /var/run/dpdk/spdk1 00:37:31.752 Removing: /var/run/dpdk/spdk2 00:37:31.752 Removing: /var/run/dpdk/spdk3 00:37:31.752 Removing: /var/run/dpdk/spdk4 00:37:31.752 Removing: /var/run/dpdk/spdk_pid3406828 00:37:31.752 Removing: /var/run/dpdk/spdk_pid3407557 00:37:31.752 Removing: /var/run/dpdk/spdk_pid3408373 00:37:31.752 Removing: /var/run/dpdk/spdk_pid3408811 00:37:31.752 Removing: /var/run/dpdk/spdk_pid3409498 00:37:31.752 Removing: /var/run/dpdk/spdk_pid3409642 00:37:31.752 Removing: /var/run/dpdk/spdk_pid3410442 00:37:31.752 Removing: /var/run/dpdk/spdk_pid3410479 00:37:31.752 Removing: /var/run/dpdk/spdk_pid3410721 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3412422 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3413324 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3413638 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3413824 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3414026 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3414215 00:37:31.753 Removing: 
/var/run/dpdk/spdk_pid3414370 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3414530 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3414708 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3415289 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3417638 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3417800 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3417962 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3417988 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3418394 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3418410 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3418836 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3418843 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3419134 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3419139 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3419303 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3419438 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3419802 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3419960 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3420153 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3420321 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3420370 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3420533 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3420686 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3420949 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3421122 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3421281 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3421434 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3421711 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3421869 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3422020 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3422278 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3422468 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3422628 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3422779 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3423058 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3423216 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3423369 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3423569 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3423805 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3423970 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3424123 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3424398 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3424467 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3424673 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3426731 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3480273 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3482768 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3489594 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3492882 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3495241 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3495763 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3503614 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3503616 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3504156 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3504809 00:37:31.753 Removing: /var/run/dpdk/spdk_pid3505469 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3505865 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3505873 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3506009 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3506146 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3506148 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3506804 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3507460 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3507999 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3508529 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3508531 00:37:32.012 Removing: /var/run/dpdk/spdk_pid3508789 00:37:32.012 Removing: 
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3509663
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3510384
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3515731
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3515891
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3518389
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3522080
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3524263
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3530507
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3536204
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3537511
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3538170
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3548220
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3550315
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3575562
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3578337
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3579514
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3580824
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3580858
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3580985
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3581118
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3581436
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3582745
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3583463
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3583773
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3585381
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3585808
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3586364
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3588784
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3592624
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3596148
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3619666
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3622315
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3626087
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3627031
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3628109
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3630657
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3633009
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3637090
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3637215
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3639978
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3640114
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3640250
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3640517
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3640641
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3641708
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3642898
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3644072
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3645250
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3646426
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3647612
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3652026
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3652355
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3653679
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3654383
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3658082
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3660048
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3663337
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3666650
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3672862
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3677211
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3677219
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3690017
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3690425
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3690884
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3691360
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3691931
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3692338
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3692748
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3693152
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3695534
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3695790
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3699572
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3699625
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3701347
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3706255
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3706260
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3709145
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3710422
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3711838
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3712685
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3714148
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3715068
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3720774
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3721113
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3721501
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3723051
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3723346
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3723734
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3726165
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3726179
00:37:32.012 Removing: /var/run/dpdk/spdk_pid3727638
00:37:32.270 Removing: /var/run/dpdk/spdk_pid3727993
00:37:32.270 Removing: /var/run/dpdk/spdk_pid3728072
00:37:32.270 Clean
00:37:32.270 23:07:24 -- common/autotest_common.sh@1447 -- # return 0
00:37:32.270 23:07:24 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:37:32.270 23:07:24 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:32.270 23:07:24 -- common/autotest_common.sh@10 -- # set +x
00:37:32.271 23:07:24 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:37:32.271 23:07:24 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:32.271 23:07:24 -- common/autotest_common.sh@10 -- # set +x
00:37:32.271 23:07:24 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:32.271 23:07:24 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:37:32.271 23:07:24 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:37:32.271 23:07:24 -- spdk/autotest.sh@391 -- # hash lcov
00:37:32.271 23:07:24 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:37:32.271 23:07:24 -- spdk/autotest.sh@393 -- # hostname
00:37:32.271 23:07:24 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:37:32.528 geninfo: WARNING: invalid characters removed from testname!
00:38:04.590 23:07:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:04.590 23:07:56 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:07.118 23:07:59 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:10.446 23:08:02 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:13.735 23:08:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:17.014 23:08:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:19.541 23:08:11 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
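The autotest.sh steps above are a standard lcov merge-and-filter pass: the pre-test baseline and the post-test capture are added into one tracefile, and code outside SPDK's own tree is then stripped with successive -r removals. A minimal sketch of the same flow, assuming an OUT variable standing in for the output directory seen in these entries (the long --rc option block is elided for brevity):

    #!/usr/bin/env bash
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    # Combine the pre-test baseline with the post-test capture into one tracefile.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Strip coverage for non-SPDK code; each -r pass rewrites the tracefile in place.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done
    # Drop the intermediate captures once the merged tracefile exists.
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"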
00:38:19.541 23:08:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:19.541 23:08:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:38:19.541 23:08:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:19.541 23:08:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:19.541 23:08:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:19.541 23:08:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:19.541 23:08:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:19.541 23:08:11 -- paths/export.sh@5 -- $ export PATH
00:38:19.541 23:08:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
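The PATH assignments above prepend the golangci-lint, Go, and protoc directories unconditionally, which is why the final echo shows several segments repeated. For reference, a dedup-aware prepend avoids that growth; this is a generic sketch, not what paths/export.sh itself does:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already on PATH, leave as-is
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH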
00:38:19.541 23:08:11 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:38:19.541 23:08:11 -- common/autobuild_common.sh@440 -- $ date +%s
00:38:19.541 23:08:11 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1722028091.XXXXXX
00:38:19.541 23:08:11 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1722028091.hDkGLw
00:38:19.541 23:08:11 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:38:19.541 23:08:11 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:38:19.541 23:08:11 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:38:19.541 23:08:11 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:38:19.541 23:08:11 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:38:19.541 23:08:11 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:38:19.541 23:08:11 -- common/autobuild_common.sh@456 -- $ get_config_params
00:38:19.541 23:08:11 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:38:19.541 23:08:11 -- common/autotest_common.sh@10 -- $ set +x
00:38:19.541 23:08:11 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:38:19.541 23:08:11 -- common/autobuild_common.sh@458 -- $ start_monitor_resources
00:38:19.541 23:08:11 -- pm/common@17 -- $ local monitor
00:38:19.541 23:08:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:19.541 23:08:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:19.541 23:08:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:19.541 23:08:11 -- pm/common@21 -- $ date +%s
00:38:19.541 23:08:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:19.541 23:08:11 -- pm/common@21 -- $ date +%s
00:38:19.541 23:08:11 -- pm/common@25 -- $ sleep 1
00:38:19.541 23:08:11 -- pm/common@21 -- $ date +%s
00:38:19.541 23:08:11 -- pm/common@21 -- $ date +%s
00:38:19.541 23:08:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722028091
00:38:19.541 23:08:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722028091
00:38:19.541 23:08:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722028091
00:38:19.541 23:08:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722028091
00:38:19.541 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722028091_collect-vmstat.pm.log
00:38:19.541 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722028091_collect-cpu-load.pm.log
00:38:19.541 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722028091_collect-cpu-temp.pm.log
00:38:19.541 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722028091_collect-bmc-pm.bmc.pm.log
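The four collect-* monitors above are started through pm/common, and each is tracked by a pid file under the power/ output directory; the matching stop path further down checks for each pid file and sends SIGTERM to the recorded pid. A minimal sketch of that pid-file start/stop pattern, with a hypothetical my-collector standing in for the real scripts (whether they background themselves or are backgrounded by pm/common is not visible in this log):

    POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    # Start: run the collector in the background and record its pid.
    my-collector -d "$POWER_DIR" &
    echo $! > "$POWER_DIR/my-collector.pid"
    # ... workload runs here ...
    # Stop: signal the recorded pid, but only if the pid file still exists.
    if [[ -e "$POWER_DIR/my-collector.pid" ]]; then
        pid=$(<"$POWER_DIR/my-collector.pid")
        kill -TERM "$pid"
    fi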
"${MONITOR_RESOURCES[@]}" 00:38:20.480 23:08:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:20.480 23:08:12 -- pm/common@44 -- $ pid=3739252 00:38:20.480 23:08:12 -- pm/common@50 -- $ kill -TERM 3739252 00:38:20.480 23:08:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:20.480 23:08:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:20.480 23:08:12 -- pm/common@44 -- $ pid=3739253 00:38:20.480 23:08:12 -- pm/common@50 -- $ kill -TERM 3739253 00:38:20.480 23:08:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:20.480 23:08:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:20.480 23:08:12 -- pm/common@44 -- $ pid=3739285 00:38:20.480 23:08:12 -- pm/common@50 -- $ sudo -E kill -TERM 3739285 00:38:20.480 + [[ -n 3302749 ]] 00:38:20.480 + sudo kill 3302749 00:38:20.489 [Pipeline] } 00:38:20.507 [Pipeline] // stage 00:38:20.513 [Pipeline] } 00:38:20.530 [Pipeline] // timeout 00:38:20.536 [Pipeline] } 00:38:20.552 [Pipeline] // catchError 00:38:20.558 [Pipeline] } 00:38:20.576 [Pipeline] // wrap 00:38:20.583 [Pipeline] } 00:38:20.599 [Pipeline] // catchError 00:38:20.609 [Pipeline] stage 00:38:20.611 [Pipeline] { (Epilogue) 00:38:20.627 [Pipeline] catchError 00:38:20.629 [Pipeline] { 00:38:20.644 [Pipeline] echo 00:38:20.646 Cleanup processes 00:38:20.652 [Pipeline] sh 00:38:20.936 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:20.936 3739391 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:20.936 3739516 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:20.949 [Pipeline] sh 00:38:21.233 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:21.233 ++ grep -v 'sudo pgrep' 00:38:21.233 ++ awk '{print $1}' 00:38:21.233 + sudo kill -9 3739391 00:38:21.245 [Pipeline] sh 00:38:21.528 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:31.504 [Pipeline] sh 00:38:31.781 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:31.782 Artifacts sizes are good 00:38:31.795 [Pipeline] archiveArtifacts 00:38:31.801 Archiving artifacts 00:38:32.026 [Pipeline] sh 00:38:32.308 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:32.324 [Pipeline] cleanWs 00:38:32.335 [WS-CLEANUP] Deleting project workspace... 00:38:32.335 [WS-CLEANUP] Deferred wipeout is used... 00:38:32.343 [WS-CLEANUP] done 00:38:32.344 [Pipeline] } 00:38:32.365 [Pipeline] // catchError 00:38:32.378 [Pipeline] sh 00:38:32.661 + logger -p user.info -t JENKINS-CI 00:38:32.670 [Pipeline] } 00:38:32.686 [Pipeline] // stage 00:38:32.691 [Pipeline] } 00:38:32.708 [Pipeline] // node 00:38:32.714 [Pipeline] End of Pipeline 00:38:32.764 Finished: SUCCESS